id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
mteb/sts15-sts | 2022-09-27T19:12:14.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 1 | 1,902 | ---
language:
- en
--- |
codeparrot/github-code | 2022-10-20T15:01:14.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"license:other",
"region:us"
] | codeparrot | The GitHub Code dataset consists of 115M code files from GitHub in 32 programming languages with 60 extensions, totaling 1TB of text data. The dataset was created from the GitHub dataset on BigQuery. | null | null | 169 | 1,899 | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- multilingual
pretty_name: github-code
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---
# GitHub Code Dataset
## Dataset Description
The GitHub Code dataset consists of 115M code files from GitHub in 32 programming languages with 60 extensions, totaling 1TB of data. The dataset was created from the public GitHub dataset on Google BigQuery.
### How to use it
The GitHub Code dataset is very large, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following two lines of code:
```python
from datasets import load_dataset
ds = load_dataset("codeparrot/github-code", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n",
'repo_name': 'MirekSz/webpack-es6-ts',
'path': 'app/mods/mod190.js',
'language': 'JavaScript',
'license': 'isc',
'size': 73
}
```
You can see that besides the code, repo name, and path, the programming language, license, and size of the file are also part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below). Just pass the desired languages as a list. E.g., if your dream is to build a Codex model for Dockerfiles, use the following configuration:
```python
ds = load_dataset("codeparrot/github-code", streaming=True, split="train", languages=["Dockerfile"])
print(next(iter(ds))["code"])
#OUTPUT:
"""\
FROM rockyluke/ubuntu:precise
ENV DEBIAN_FRONTEND="noninteractive" \
TZ="Europe/Amsterdam"
...
"""
```
We also have access to the license of a file's originating repository, so we can filter for licenses the same way we filtered for languages:
```python
from collections import Counter

ds = load_dataset("codeparrot/github-code", streaming=True, split="train", licenses=["mit", "isc"])

licenses = []
for element in ds.take(10_000):  # IterableDataset.take yields the first 10k examples
    licenses.append(element["license"])
print(Counter(licenses))
#OUTPUT:
Counter({'mit': 9896, 'isc': 104})
```
Naturally, you can also download the full dataset. Note that this will download ~300GB compressed text data and the uncompressed dataset will take up ~1TB of storage:
```python
ds = load_dataset("codeparrot/github-code", split="train")
```
## Data Structure
### Data Instances
```python
{
'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n",
'repo_name': 'MirekSz/webpack-es6-ts',
'path': 'app/mods/mod190.js',
'language': 'JavaScript',
'license': 'isc',
'size': 73
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|code|string|content of source file|
|repo_name|string|name of the GitHub repository|
|path|string|path of file in GitHub repository|
|language|string|programming language as inferred by extension|
|license|string|license of GitHub repository|
|size|int|size of source file in bytes|
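Taken together, these fields make it easy to filter a stream client-side. Here is a minimal sketch using the field names documented above; the language and size thresholds are illustrative choices, not part of the dataset:

```python
# Sketch: client-side filtering of examples using the documented fields.
# The defaults below are illustrative, not part of the dataset.
def keep(example, language="JavaScript", max_size=10_000):
    """Keep only files in `language` that are at most `max_size` bytes."""
    return example["language"] == language and example["size"] <= max_size

sample = {
    "code": "export default 1;\n",
    "repo_name": "MirekSz/webpack-es6-ts",
    "path": "app/mods/mod190.js",
    "language": "JavaScript",
    "license": "isc",
    "size": 73,
}
print(keep(sample))  # True
```

With a streaming dataset, the same predicate can be passed to `ds.filter(keep)`.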
### Data Splits
The dataset only contains a train split.
## Languages
The dataset contains 30 programming languages with over 60 extensions:
```python
{
"Assembly": [".asm"],
"Batchfile": [".bat", ".cmd"],
"C": [".c", ".h"],
"C#": [".cs"],
"C++": [".cpp", ".hpp", ".c++", ".h++", ".cc", ".hh", ".C", ".H"],
"CMake": [".cmake"],
"CSS": [".css"],
"Dockerfile": [".dockerfile", "Dockerfile"],
"FORTRAN": ['.f90', '.f', '.f03', '.f08', '.f77', '.f95', '.for', '.fpp'],
"GO": [".go"],
"Haskell": [".hs"],
"HTML":[".html"],
"Java": [".java"],
"JavaScript": [".js"],
"Julia": [".jl"],
"Lua": [".lua"],
"Makefile": ["Makefile"],
"Markdown": [".md", ".markdown"],
"PHP": [".php", ".php3", ".php4", ".php5", ".phps", ".phpt"],
"Perl": [".pl", ".pm", ".pod", ".perl"],
"PowerShell": ['.ps1', '.psd1', '.psm1'],
"Python": [".py"],
"Ruby": [".rb"],
"Rust": [".rs"],
"SQL": [".sql"],
"Scala": [".scala"],
"Shell": [".sh", ".bash", ".command", ".zsh"],
"TypeScript": [".ts", ".tsx"],
"TeX": [".tex"],
"Visual Basic": [".vb"]
}
```
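The mapping above can be inverted to recover a file's language from its path, the way the dataset infers it from extensions. A small sketch over a subset of the table (the `infer_language` helper is our name, not part of the dataset):

```python
import os

# Sketch: invert the language -> extensions table above (subset shown) so a
# path can be mapped back to its language, as the dataset does by extension.
LANGUAGES = {
    "Python": [".py"],
    "C": [".c", ".h"],
    "Dockerfile": [".dockerfile", "Dockerfile"],
    "Makefile": ["Makefile"],
}
LANG_BY_EXT = {ext: lang for lang, exts in LANGUAGES.items() for ext in exts}

def infer_language(path):
    base = os.path.basename(path)
    if base in LANG_BY_EXT:  # bare filenames such as "Dockerfile" or "Makefile"
        return LANG_BY_EXT[base]
    return LANG_BY_EXT.get(os.path.splitext(base)[1])

print(infer_language("src/app.py"))         # Python
print(infer_language("docker/Dockerfile"))  # Dockerfile
```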
## Licenses
Each example is also annotated with the license of the associated repository. There are in total 15 licenses:
```python
[
'mit',
'apache-2.0',
'gpl-3.0',
'gpl-2.0',
'bsd-3-clause',
'agpl-3.0',
'lgpl-3.0',
'lgpl-2.1',
'bsd-2-clause',
'cc0-1.0',
'epl-1.0',
'mpl-2.0',
'unlicense',
'isc',
'artistic-2.0'
]
```
## Dataset Statistics
The dataset contains 115M files and the sum of all the source code file sizes is 873 GB (note that the size of the dataset is larger due to the extra fields). A breakdown per language is given in the plot and table below:

| | Language |File Count| Size (GB)|
|---:|:-------------|---------:|-------:|
| 0 | Java | 19548190 | 107.70 |
| 1 | C | 14143113 | 183.83 |
| 2 | JavaScript | 11839883 | 87.82 |
| 3 | HTML | 11178557 | 118.12 |
| 4 | PHP | 11177610 | 61.41 |
| 5 | Markdown | 8464626 | 23.09 |
| 6 | C++ | 7380520 | 87.73 |
| 7 | Python | 7226626 | 52.03 |
| 8 | C# | 6811652 | 36.83 |
| 9 | Ruby | 4473331 | 10.95 |
| 10 | GO | 2265436 | 19.28 |
| 11 | TypeScript | 1940406 | 24.59 |
| 12 | CSS | 1734406 | 22.67 |
| 13 | Shell | 1385648 | 3.01 |
| 14 | Scala | 835755 | 3.87 |
| 15 | Makefile | 679430 | 2.92 |
| 16 | SQL | 656671 | 5.67 |
| 17 | Lua | 578554 | 2.81 |
| 18 | Perl | 497949 | 4.70 |
| 19 | Dockerfile | 366505 | 0.71 |
| 20 | Haskell | 340623 | 1.85 |
| 21 | Rust | 322431 | 2.68 |
| 22 | TeX | 251015 | 2.15 |
| 23 | Batchfile | 236945 | 0.70 |
| 24 | CMake | 175282 | 0.54 |
| 25 | Visual Basic | 155652 | 1.91 |
| 26 | FORTRAN | 142038 | 1.62 |
| 27 | PowerShell | 136846 | 0.69 |
| 28 | Assembly | 82905 | 0.78 |
| 29 | Julia | 58317 | 0.29 |
## Dataset Creation
The dataset was created in two steps:
1. Files with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query [here](https://huggingface.co/datasets/codeparrot/github-code/blob/main/query.sql)). The query was executed on _Mar 16, 2022, 6:23:39 PM UTC+1_.
2. Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespaces) were dropped (full preprocessing script [here](https://huggingface.co/datasets/codeparrot/github-code/blob/main/github_preprocessing.py)).
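The two filtering steps can be sketched as follows. This is a simplified illustration, not the linked preprocessing script; the whitespace-insensitive hash mirrors the "exact duplicates ignoring whitespaces" rule:

```python
import hashlib

# Sketch of the two filters described above: drop files with very long
# lines, then drop exact duplicates ignoring whitespace.
MAX_LINE_LENGTH = 1000

def has_long_lines(code):
    return any(len(line) > MAX_LINE_LENGTH for line in code.splitlines())

def content_hash(code):
    # hash of the file with all whitespace removed
    stripped = "".join(code.split())
    return hashlib.sha256(stripped.encode("utf-8")).hexdigest()

seen = set()

def keep(code):
    if has_long_lines(code):
        return False
    h = content_hash(code)
    if h in seen:
        return False
    seen.add(h)
    return True

print(keep("x = 1\n"))    # True
print(keep("x  =  1\n"))  # False: duplicate modulo whitespace
```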
## Considerations for Using the Data
The dataset consists of source code from a wide range of repositories. As such, it can potentially include harmful or biased code, as well as sensitive information like passwords or usernames.
## Releases
You can load any older version of the dataset with the `revision` argument:
```Python
ds = load_dataset("codeparrot/github-code", revision="v1.0")
```
### v1.0
- Initial release of dataset
- The query was executed on _Feb 14, 2022, 12:03:16 PM UTC+1_
### v1.1
- Fix missing Scala/TypeScript
- Fix deduplication issue with inconsistent Python `hash`
- The query was executed on _Mar 16, 2022, 6:23:39 PM UTC+1_
|
vicgalle/alpaca-gpt4 | 2023-09-26T18:51:15.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-4.0",
"gpt4",
"alpaca",
"instruction-finetuning",
"arxiv:2304.03277",
"region:us"
] | vicgalle | null | null | null | 98 | 1,897 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 88566301
num_examples: 52002
download_size: 48393562
dataset_size: 88566301
task_categories:
- text-generation
- conversational
- question-answering
language:
- en
size_categories:
- 10K<n<100K
license: cc-by-nc-4.0
tags:
- gpt4
- alpaca
- instruction-finetuning
---
# Dataset Card for "alpaca-gpt4"
This dataset contains English instruction-following data generated by GPT-4 using Alpaca prompts, intended for fine-tuning LLMs.
The dataset was originally shared in this repository: https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM. This is just a wrapper for compatibility with Hugging Face's `datasets` library.
## Dataset Description
- **Homepage:** https://instruction-tuning-with-gpt-4.github.io
- **Repository:** https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
- **Paper:** https://arxiv.org/abs/2304.03277
## Dataset structure
It contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca.
The dataset has the same format as Alpaca data, except the output is generated by GPT-4:
- `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, the answer to the instruction as generated by `GPT-4`.
- `text`: `str`, all the previous fields concatenated together, plus the same prompt used in Alpaca at the beginning.
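As a hedged illustration, the `text` field can be reconstructed from the other three fields using the standard Alpaca prompt template for examples with an input (template assumed from the examples shown in this card; `build_text` is a hypothetical helper name):

```python
# Sketch: rebuilding the `text` field from the other fields using the
# Alpaca prompt for examples that have an input. The template is assumed
# from the examples in this card; `build_text` is our name.
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def build_text(example):
    return PROMPT_WITH_INPUT.format(**example)

example = {
    "instruction": "Identify the odd one out.",
    "input": "Twitter, Instagram, Telegram",
    "output": "Telegram",
}
print(build_text(example))
```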
## Difference with the original Alpaca dataset
The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generates the completions with GPT-4. Thus, in general, the responses are of higher quality and length. Here is an example:
#### Example from Alpaca-GPT4:
```python
{'instruction': 'Identify the odd one out.',
'input': 'Twitter, Instagram, Telegram',
'output': 'The odd one out is Telegram. Twitter and Instagram are social media platforms mainly for sharing information, images and videos while Telegram is a cloud-based instant messaging and voice-over-IP service.',
'text': 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIdentify the odd one out.\n\n### Input:\nTwitter, Instagram, Telegram\n\n### Response:\nThe odd one out is Telegram. Twitter and Instagram are social media platforms mainly for sharing information, images and videos while Telegram is a cloud-based instant messaging and voice-over-IP service.'}
```
#### Same example from original Alpaca:
```python
{'instruction': 'Identify the odd one out.',
'input': 'Twitter, Instagram, Telegram',
'output': 'Telegram',
'text': 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIdentify the odd one out.\n\n### Input:\nTwitter, Instagram, Telegram\n\n### Response:\nTelegram'}
```
## Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). |
HuggingFaceM4/VQAv2 | 2022-06-30T13:15:04.000Z | [
"region:us"
] | HuggingFaceM4 | VQA is a new dataset containing open-ended questions about images. These questions require an understanding of vision, language and commonsense knowledge to answer. | @InProceedings{VQA,
author = {Stanislaw Antol and Aishwarya Agrawal and Jiasen Lu and Margaret Mitchell and Dhruv Batra and C. Lawrence Zitnick and Devi Parikh},
title = {VQA: Visual Question Answering},
booktitle = {International Conference on Computer Vision (ICCV)},
year = {2015},
} | null | 6 | 1,893 | Checks with https://visualqa.org/download.html:
- Num train questions: 443,757
- Num val questions: 214,354
- Num test questions: 447,793
- Num train answers: 4,437,570
- Num val answers: 2,143,540
- Num train images: 82,783
- Num val images: 40,504
- Num test images: 81,434
test-dev is not mentioned there:
- Num questions: 107,394
- Num images: 36,807 |
HuggingFaceH4/mt_bench_prompts | 2023-07-03T20:52:34.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"evaluation",
"arxiv:2306.05685",
"region:us"
] | HuggingFaceH4 | null | null | null | 2 | 1,883 | ---
license: apache-2.0
task_categories:
- question-answering
- conversational
language:
- en
tags:
- evaluation
pretty_name: MT Bench
size_categories:
- n<1K
---
# MT Bench by LMSYS
This set of evaluation prompts is created by the [LMSYS org](https://huggingface.co/lmsys) for better evaluation of chat models.
For more information, see the [paper](https://arxiv.org/abs/2306.05685).
### Dataset loading
To load this dataset, use 🤗 datasets:
```python
from datasets import load_dataset
data = load_dataset("HuggingFaceH4/mt_bench_prompts", split="train")
```
### Dataset creation
To create the dataset for our internal tooling, we:
* rename `turns` to `prompts`,
* add an empty `reference` to the remaining prompts (for HF Datasets),
* use the following code to load and save it as a dataset:
```python
from datasets import load_dataset
import hashlib
data = load_dataset("json", data_files="https://huggingface.co/datasets/HuggingFaceH4/mt_bench_prompts/raw/main/raw/question.jsonl", split="train")
def format_example(example):
return {
"prompt": example["prompt"],
"prompt_id": int(hashlib.sha256(''.join(example["prompt"]).encode("utf-8")).hexdigest(), 16) % (10 ** 8),
"category": example["category"],
"reference": example["reference"],
}
formatted_ds = data.map(format_example, num_proc=6, remove_columns=data.column_names)
formatted_ds.push_to_hub("HuggingFaceH4/mt_bench_prompts", split="train")
``` |
dart | 2022-11-18T19:57:00.000Z | [
"task_categories:tabular-to-text",
"task_ids:rdf-to-text",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikitable_questions",
"source_datasets:extended|wikisql",
"source_datasets:extended|web_nlg",
"source_datasets:extended|cleaned_e2e",
"language:en",
"license:mit",
"arxiv:2007.02871",
"region:us"
] | null | DART is a large and open-domain structured DAta Record to Text generation corpus with high-quality
sentence annotations with each input being a set of entity-relation triples following a tree-structured ontology.
It consists of 82191 examples across different domains with each input being a semantic RDF triple set derived
from data records in tables and the tree ontology of table schema, annotated with sentence description that
covers all facts in the triple set.
DART is released in the following paper where you can find more details and baseline results:
https://arxiv.org/abs/2007.02871 | @article{radev2020dart,
title={DART: Open-Domain Structured Data Record to Text Generation},
author={Dragomir Radev and Rui Zhang and Amrit Rau and Abhinand Sivaprasad and Chiachun Hsieh and Nazneen Fatema Rajani and Xiangru Tang and Aadit Vyas and Neha Verma and Pranav Krishna and Yangxiaokang Liu and Nadia Irwanto and Jessica Pan and Faiaz Rahman and Ahmad Zaidi and Murori Mutuma and Yasin Tarabar and Ankit Gupta and Tao Yu and Yi Chern Tan and Xi Victoria Lin and Caiming Xiong and Richard Socher},
journal={arXiv preprint arXiv:2007.02871},
year={2020}} | null | 3 | 1,874 | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikitable_questions
- extended|wikisql
- extended|web_nlg
- extended|cleaned_e2e
task_categories:
- tabular-to-text
task_ids:
- rdf-to-text
paperswithcode_id: dart
pretty_name: DART
dataset_info:
features:
- name: tripleset
sequence:
sequence: string
- name: subtree_was_extended
dtype: bool
- name: annotations
sequence:
- name: source
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 12966443
num_examples: 30526
- name: validation
num_bytes: 1458106
num_examples: 2768
- name: test
num_bytes: 2657644
num_examples: 5097
download_size: 29939366
dataset_size: 17082193
---
# Dataset Card for DART
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [homepage](https://github.com/Yale-LILY/dart)
- **Repository:** [github](https://github.com/Yale-LILY/dart)
- **Paper:** [paper](https://arxiv.org/abs/2007.02871)
- **Leaderboard:** [leaderboard](https://github.com/Yale-LILY/dart#leaderboard)
### Dataset Summary
DART is a large dataset for open-domain structured data record to text generation. We consider the structured data record input as a set of RDF entity-relation triples, a format widely used for knowledge representation and semantics description. DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. This hierarchical, structured format with its open-domain nature differentiates DART from other existing table-to-text corpora.
### Supported Tasks and Leaderboards
The task associated to DART is text generation from data records that are RDF triplets:
- `rdf-to-text`: The dataset can be used to train a model for text generation from RDF triplets, which consists of generating a textual description of structured data. Success on this task is typically measured by achieving a *high* [BLEU](https://huggingface.co/metrics/bleu), [METEOR](https://huggingface.co/metrics/meteor), [BLEURT](https://huggingface.co/metrics/bleurt), [MoverScore](https://huggingface.co/metrics/mover_score), and [BERTScore](https://huggingface.co/metrics/bert_score), and a *low* [TER](https://huggingface.co/metrics/ter). The [BART-large](https://huggingface.co/facebook/bart-large) model (see the [BART documentation](https://huggingface.co/transformers/model_doc/bart.html)) currently achieves the following scores:
| | BLEU | METEOR | TER | MoverScore | BERTScore | BLEURT |
| ----- | ----- | ------ | ---- | ----------- | ---------- | ------ |
| BART | 37.06 | 0.36 | 0.57 | 0.44 | 0.92 | 0.22 |
This task has an active leaderboard, which can be found [here](https://github.com/Yale-LILY/dart#leaderboard) and ranks models based on the above metrics.
### Languages
The dataset is in English (en).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```
{'annotations': {'source': ['WikiTableQuestions_mturk'],
'text': ['First Clearing\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville']},
'subtree_was_extended': False,
'tripleset': [['First Clearing', 'LOCATION', 'On NYS 52 1 Mi. Youngsville'],
['On NYS 52 1 Mi. Youngsville', 'CITY_OR_TOWN', 'Callicoon, New York']]}
```
It contains one annotation, whose textual description is 'First Clearing\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville'. The RDF triplets used to generate this description are in `tripleset` and are formatted as (subject, predicate, object).
### Data Fields
The different fields are:
- `annotations`:
- `text`: list of text descriptions of the triplets
- `source`: list of sources of the RDF triplets (WikiTable, e2e, etc.)
- `subtree_was_extended`: boolean, whether the subtree considered during dataset construction was extended. Sometimes this field is missing and is therefore set to `None`
- `tripleset`: RDF triplets as a list of triplets of strings (subject, predicate, object)
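For seq2seq training, the `tripleset` is typically linearized into a single input string. A hedged sketch follows; the `<H>/<R>/<T>` separator tokens are a common convention in the data-to-text literature, not part of DART itself:

```python
# Sketch: flatten a DART tripleset into one string for a seq2seq encoder.
# The <H>/<R>/<T> markers are an illustrative convention, not DART's format.
def linearize(tripleset):
    return " ".join(f"<H> {s} <R> {p} <T> {o}" for s, p, o in tripleset)

tripleset = [
    ["First Clearing", "LOCATION", "On NYS 52 1 Mi. Youngsville"],
    ["On NYS 52 1 Mi. Youngsville", "CITY_OR_TOWN", "Callicoon, New York"],
]
print(linearize(tripleset))
```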
### Data Splits
There are three splits, train, validation and test:
| | train | validation | test |
| ----- |------:|-----------:|-----:|
| N. Examples | 30526 | 2768 | 6959 |
## Dataset Creation
### Curation Rationale
Automatically generating textual descriptions from structured data inputs is crucial to improving the accessibility of knowledge bases to lay users.
### Source Data
DART comes from existing datasets that cover a variety of domains while allowing the construction of a tree ontology and the formation of RDF triple sets as semantic representations. The datasets used are WikiTableQuestions, WikiSQL, WebNLG and Cleaned E2E.
#### Initial Data Collection and Normalization
DART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables
from WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019)
#### Who are the source language producers?
[More Information Needed]
### Annotations
DART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables
from WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019)
#### Annotation process
The two stage annotation process for constructing tripleset sentence pairs is based on a tree-structured ontology of each table.
First, internal skilled annotators denote the parent column for each column header.
Then, a larger number of annotators provide a sentential description of an automatically-chosen subset of table cells in a row.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Under MIT license (see [here](https://github.com/Yale-LILY/dart/blob/master/LICENSE))
### Citation Information
```
@article{radev2020dart,
title={DART: Open-Domain Structured Data Record to Text Generation},
author={Dragomir Radev and Rui Zhang and Amrit Rau and Abhinand Sivaprasad and Chiachun Hsieh and Nazneen Fatema Rajani and Xiangru Tang and Aadit Vyas and Neha Verma and Pranav Krishna and Yangxiaokang Liu and Nadia Irwanto and Jessica Pan and Faiaz Rahman and Ahmad Zaidi and Murori Mutuma and Yasin Tarabar and Ankit Gupta and Tao Yu and Yi Chern Tan and Xi Victoria Lin and Caiming Xiong and Richard Socher},
journal={arXiv preprint arXiv:2007.02871},
  year={2020}
}
```
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
wiki_bio | 2022-11-18T22:00:08.000Z | [
"task_categories:table-to-text",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0",
"arxiv:1603.07771",
"region:us"
] | null | This dataset gathers 728,321 biographies from wikipedia. It aims at evaluating text generation
algorithms. For each article, we provide the first paragraph and the infobox (both tokenized).
For each article, we extracted the first paragraph (text), the infobox (structured data). Each
infobox is encoded as a list of (field name, field value) pairs. We used Stanford CoreNLP
(http://stanfordnlp.github.io/CoreNLP/) to preprocess the data, i.e. we broke the text into
sentences and tokenized both the text and the field values. The dataset was randomly split into
three subsets: train (80%), valid (10%), test (10%). | @article{DBLP:journals/corr/LebretGA16,
author = {R{\'{e}}mi Lebret and
David Grangier and
Michael Auli},
title = {Generating Text from Structured Data with Application to the Biography
Domain},
journal = {CoRR},
volume = {abs/1603.07771},
year = {2016},
url = {http://arxiv.org/abs/1603.07771},
archivePrefix = {arXiv},
eprint = {1603.07771},
timestamp = {Mon, 13 Aug 2018 16:48:30 +0200},
biburl = {https://dblp.org/rec/journals/corr/LebretGA16.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 10 | 1,873 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- table-to-text
task_ids: []
paperswithcode_id: wikibio
pretty_name: WikiBio
dataset_info:
features:
- name: input_text
struct:
- name: table
sequence:
- name: column_header
dtype: string
- name: row_number
dtype: int16
- name: content
dtype: string
- name: context
dtype: string
- name: target_text
dtype: string
splits:
- name: train
num_bytes: 619269257
num_examples: 582659
- name: test
num_bytes: 77264695
num_examples: 72831
- name: val
num_bytes: 77335069
num_examples: 72831
download_size: 333998704
dataset_size: 773869021
---
# Dataset Card for WikiBio
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/DavidGrangier/wikipedia-biography-dataset
- **Paper:** https://arxiv.org/pdf/1603.07771.pdf
- **GitHub:** https://github.com/DavidGrangier/wikipedia-biography-dataset
### Dataset Summary
This dataset contains 728,321 biographies extracted from Wikipedia, each consisting of the first paragraph of the biography and the tabular infobox.
### Supported Tasks and Leaderboards
The main purpose of this dataset is developing text generation models.
### Languages
English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
The structure of a single sample is the following:
```json
{
"input_text":{
"context":"pope michael iii of alexandria\n",
"table":{
"column_header":[
"type",
"ended",
"death_date",
"title",
"enthroned",
"name",
"buried",
"religion",
"predecessor",
"nationality",
"article_title",
"feast_day",
"birth_place",
"residence",
"successor"
],
"content":[
"pope",
"16 march 907",
"16 march 907",
"56th of st. mark pope of alexandria & patriarch of the see",
"25 april 880",
"michael iii of alexandria",
"monastery of saint macarius the great",
"coptic orthodox christian",
"shenouda i",
"egyptian",
"pope michael iii of alexandria\n",
"16 -rrb- march -lrb- 20 baramhat in the coptic calendar",
"egypt",
"saint mark 's church",
"gabriel i"
],
"row_number":[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
}
},
"target_text":"pope michael iii of alexandria -lrb- also known as khail iii -rrb- was the coptic pope of alexandria and patriarch of the see of st. mark -lrb- 880 -- 907 -rrb- .\nin 882 , the governor of egypt , ahmad ibn tulun , forced khail to pay heavy contributions , forcing him to sell a church and some attached properties to the local jewish community .\nthis building was at one time believed to have later become the site of the cairo geniza .\n"
}
```
where, in the `"table"` field, all the information of the Wikipedia infobox is stored (the headers of the infobox are stored in `"column_header"` and the values in the `"content"` field).
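Since the headers, contents, and row numbers are parallel lists, they can be zipped back into per-row dicts for easier inspection. A small sketch (`infobox_rows` is our name, not part of the dataset):

```python
# Sketch: zip the parallel column_header / content / row_number lists back
# into one dict per infobox row.
def infobox_rows(table):
    rows = {}
    for header, content, row in zip(
        table["column_header"], table["content"], table["row_number"]
    ):
        rows.setdefault(row, {})[header] = content
    return rows

table = {
    "column_header": ["name", "nationality"],
    "content": ["michael iii of alexandria", "egyptian"],
    "row_number": [1, 1],
}
print(infobox_rows(table))
```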
### Data Splits
- Train: 582659 samples.
- Test: 72831 samples.
- Validation: 72831 samples.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
This dataset was announced in the paper <em>Neural Text Generation from Structured Data with Application to the Biography Domain</em> [(arxiv link)](https://arxiv.org/pdf/1603.07771.pdf) and is stored in [this](https://github.com/DavidGrangier/wikipedia-biography-dataset) repo (owned by DavidGrangier).
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This dataset is distributed under the Creative Commons Attribution-ShareAlike 3.0 (CC BY-SA 3.0) license.
### Citation Information
To reference the original paper in BibTeX format:
```
@article{DBLP:journals/corr/LebretGA16,
author = {R{\'{e}}mi Lebret and
David Grangier and
Michael Auli},
title = {Generating Text from Structured Data with Application to the Biography
Domain},
journal = {CoRR},
volume = {abs/1603.07771},
year = {2016},
url = {http://arxiv.org/abs/1603.07771},
archivePrefix = {arXiv},
eprint = {1603.07771},
timestamp = {Mon, 13 Aug 2018 16:48:30 +0200},
biburl = {https://dblp.org/rec/journals/corr/LebretGA16.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@alejandrocros](https://github.com/alejandrocros) for adding this dataset. |
wikicorpus | 2023-06-01T14:59:54.000Z | [
"task_categories:fill-mask",
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:token-classification",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"task_ids:part-of-speech",
"annotations_creators:machine-generated",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10M<n<100M",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ca",
"language:en",
"language:es",
"license:gfdl",
"word-sense-disambiguation",
"lemmatization",
"region:us"
] | null | The Wikicorpus is a trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia (based on a 2006 dump) and has been automatically enriched with linguistic information. In its present version, it contains over 750 million words. | @inproceedings{reese-etal-2010-wikicorpus,
title = "{W}ikicorpus: A Word-Sense Disambiguated Multilingual {W}ikipedia Corpus",
author = "Reese, Samuel and
Boleda, Gemma and
Cuadros, Montse and
Padr{\'o}, Llu{\'i}s and
Rigau, German",
booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)",
month = may,
year = "2010",
address = "Valletta, Malta",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2010/pdf/222_Paper.pdf",
abstract = "This article presents a new freely available trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia and has been automatically enriched with linguistic information. To our knowledge, this is the largest such corpus that is freely available to the community: In its present version, it contains over 750 million words. The corpora have been annotated with lemma and part of speech information using the open source library FreeLing. Also, they have been sense annotated with the state of the art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. We present a first attempt at creating a trilingual lexical resource from the sense-tagged Wikipedia corpora, namely, WikiNet. Moreover, we present two by-products of the project that are of use for the NLP community: An open source Java-based parser for Wikipedia pages developed for the construction of the corpus, and the integration of the WSD algorithm UKB in FreeLing.",
} | null | 5 | 1,868 | ---
pretty_name: Wikicorpus
annotations_creators:
- machine-generated
- no-annotation
language_creators:
- found
language:
- ca
- en
- es
license:
- gfdl
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10M<n<100M
- 1M<n<10M
source_datasets:
- original
task_categories:
- fill-mask
- text-classification
- text-generation
- token-classification
task_ids:
- language-modeling
- masked-language-modeling
- part-of-speech
paperswithcode_id: null
tags:
- word-sense-disambiguation
- lemmatization
dataset_info:
- config_name: raw_ca
features:
- name: id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 263170192
num_examples: 143883
download_size: 96437841
dataset_size: 263170192
- config_name: raw_es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 671295359
num_examples: 259409
download_size: 252926918
dataset_size: 671295359
- config_name: raw_en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3388801074
num_examples: 1359146
download_size: 1346378932
dataset_size: 3388801074
- config_name: tagged_ca
features:
- name: id
dtype: string
- name: title
dtype: string
- name: sentence
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence: string
- name: wordnet_senses
sequence: string
splits:
- name: train
num_bytes: 1666129919
num_examples: 2016221
download_size: 226390380
dataset_size: 1666129919
- config_name: tagged_es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: sentence
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence: string
- name: wordnet_senses
sequence: string
splits:
- name: train
num_bytes: 4100040390
num_examples: 5039367
download_size: 604910899
dataset_size: 4100040390
- config_name: tagged_en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: sentence
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence: string
- name: wordnet_senses
sequence: string
splits:
- name: train
num_bytes: 18077275300
num_examples: 26350272
download_size: 2477450893
dataset_size: 18077275300
config_names:
- raw_ca
- raw_en
- raw_es
- tagged_ca
- tagged_en
- tagged_es
---
# Dataset Card for Wikicorpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.cs.upc.edu/~nlp/wikicorpus/
- **Repository:**
- **Paper:** https://www.cs.upc.edu/~nlp/papers/reese10.pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Wikicorpus is a trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia (based on a 2006 dump) and has been automatically enriched with linguistic information. In its present version, it contains over 750 million words.
The corpora have been annotated with lemma and part-of-speech information using the open-source library FreeLing. They have also been sense-annotated with the state-of-the-art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before.
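As a rough illustration of what a record in one of the `tagged_*` configurations looks like, the sketch below uses the field names declared in the `dataset_info` metadata of this card (`id`, `title`, `sentence`, `lemmas`, `pos_tags`, `wordnet_senses`); the token values themselves are invented for illustration, not taken from the corpus:

```python
# Illustrative record for a `tagged_*` configuration. Field names match
# the declared features; the token values are made up for this sketch.
example = {
    "id": "12",
    "title": "Anarchism",
    "sentence": ["Anarchism", "is", "a", "political", "philosophy"],
    "lemmas": ["anarchism", "be", "a", "political", "philosophy"],
    "pos_tags": ["NN", "VBZ", "DT", "JJ", "NN"],
    "wordnet_senses": ["00759694-n", "02604760-v", "", "01590928-a", "06158346-n"],
}

# The token-level fields are parallel sequences: one lemma, one POS tag
# and one (possibly empty) WordNet sense per word in the sentence.
n_tokens = len(example["sentence"])
assert all(
    len(example[field]) == n_tokens
    for field in ("lemmas", "pos_tags", "wordnet_senses")
)
```

The `raw_*` configurations instead carry only `id`, `title` and a plain `text` field, as declared in the metadata above.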
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Each configuration is monolingual, in one of the following languages:
- ca: Catalan
- en: English
- es: Spanish
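Each language is exposed through two configurations, a raw one and a tagged one, following the naming pattern in `config_names` above. The helper below is a hypothetical sketch of that convention, not part of any API:

```python
# Hypothetical helper: map a language code to this dataset's config
# names, following the raw_<lang> / tagged_<lang> naming scheme.
def configs_for(lang: str) -> list:
    assert lang in {"ca", "en", "es"}, "Wikicorpus covers Catalan, English and Spanish"
    return [f"raw_{lang}", f"tagged_{lang}"]
```

For example, `configs_for("es")` yields `["raw_es", "tagged_es"]`, matching the configurations listed in the metadata.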
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The WikiCorpus is licensed under the same license as Wikipedia, that is, the [GNU Free Documentation License](http://www.fsf.org/licensing/licenses/fdl.html).
### Citation Information
```
@inproceedings{reese-etal-2010-wikicorpus,
title = "{W}ikicorpus: A Word-Sense Disambiguated Multilingual {W}ikipedia Corpus",
author = "Reese, Samuel and
Boleda, Gemma and
Cuadros, Montse and
Padr{\'o}, Llu{\'i}s and
Rigau, German",
booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)",
month = may,
year = "2010",
address = "Valletta, Malta",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2010/pdf/222_Paper.pdf",
abstract = "This article presents a new freely available trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia and has been automatically enriched with linguistic information. To our knowledge, this is the largest such corpus that is freely available to the community: In its present version, it contains over 750 million words. The corpora have been annotated with lemma and part of speech information using the open source library FreeLing. Also, they have been sense annotated with the state of the art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. We present a first attempt at creating a trilingual lexical resource from the sense-tagged Wikipedia corpora, namely, WikiNet. Moreover, we present two by-products of the project that are of use for the NLP community: An open source Java-based parser for Wikipedia pages developed for the construction of the corpus, and the integration of the WSD algorithm UKB in FreeLing.",
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
scene_parse_150 | 2023-01-25T14:43:32.000Z | [
"task_categories:image-segmentation",
"task_ids:instance-segmentation",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|ade20k",
"language:en",
"license:bsd-3-clause",
"scene-parsing",
"arxiv:1608.05442",
"region:us"
] | null | Scene parsing is the task of segmenting and parsing an image into different image regions associated with semantic categories, such as sky, road, person, and bed.
MIT Scene Parsing Benchmark (SceneParse150) provides a standard training and evaluation platform for the algorithms of scene parsing.
The data for this benchmark comes from ADE20K Dataset which contains more than 20K scene-centric images exhaustively annotated with objects and object parts.
Specifically, the benchmark is divided into 20K images for training, 2K images for validation, and another batch of held-out images for testing.
There are 150 semantic categories in total included for evaluation, which comprise stuff classes such as sky, road and grass, and discrete objects such as person, car and bed.
Note that the distribution of objects occurring in the images is non-uniform, mimicking the more natural object occurrence of daily scenes. | @inproceedings{zhou2017scene,
title={Scene Parsing through ADE20K Dataset},
author={Zhou, Bolei and Zhao, Hang and Puig, Xavier and Fidler, Sanja and Barriuso, Adela and Torralba, Antonio},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
year={2017}
}
@article{zhou2016semantic,
title={Semantic understanding of scenes through the ade20k dataset},
author={Zhou, Bolei and Zhao, Hang and Puig, Xavier and Fidler, Sanja and Barriuso, Adela and Torralba, Antonio},
journal={arXiv preprint arXiv:1608.05442},
year={2016}
} | null | 11 | 1,861 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- found
language:
- en
license:
- bsd-3-clause
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|ade20k
task_categories:
- image-segmentation
task_ids:
- instance-segmentation
paperswithcode_id: ade20k
pretty_name: MIT Scene Parsing Benchmark
tags:
- scene-parsing
dataset_info:
- config_name: scene_parsing
features:
- name: image
dtype: image
- name: annotation
dtype: image
- name: scene_category
dtype:
class_label:
names:
'0': airport_terminal
'1': art_gallery
'2': badlands
'3': ball_pit
'4': bathroom
'5': beach
'6': bedroom
'7': booth_indoor
'8': botanical_garden
'9': bridge
'10': bullring
'11': bus_interior
'12': butte
'13': canyon
'14': casino_outdoor
'15': castle
'16': church_outdoor
'17': closet
'18': coast
'19': conference_room
'20': construction_site
'21': corral
'22': corridor
'23': crosswalk
'24': day_care_center
'25': sand
'26': elevator_interior
'27': escalator_indoor
'28': forest_road
'29': gangplank
'30': gas_station
'31': golf_course
'32': gymnasium_indoor
'33': harbor
'34': hayfield
'35': heath
'36': hoodoo
'37': house
'38': hunting_lodge_outdoor
'39': ice_shelf
'40': joss_house
'41': kiosk_indoor
'42': kitchen
'43': landfill
'44': library_indoor
'45': lido_deck_outdoor
'46': living_room
'47': locker_room
'48': market_outdoor
'49': mountain_snowy
'50': office
'51': orchard
'52': arbor
'53': bookshelf
'54': mews
'55': nook
'56': preserve
'57': traffic_island
'58': palace
'59': palace_hall
'60': pantry
'61': patio
'62': phone_booth
'63': establishment
'64': poolroom_home
'65': quonset_hut_outdoor
'66': rice_paddy
'67': sandbox
'68': shopfront
'69': skyscraper
'70': stone_circle
'71': subway_interior
'72': platform
'73': supermarket
'74': swimming_pool_outdoor
'75': television_studio
'76': indoor_procenium
'77': train_railway
'78': coral_reef
'79': viaduct
'80': wave
'81': wind_farm
'82': bottle_storage
'83': abbey
'84': access_road
'85': air_base
'86': airfield
'87': airlock
'88': airplane_cabin
'89': airport
'90': entrance
'91': airport_ticket_counter
'92': alcove
'93': alley
'94': amphitheater
'95': amusement_arcade
'96': amusement_park
'97': anechoic_chamber
'98': apartment_building_outdoor
'99': apse_indoor
'100': apse_outdoor
'101': aquarium
'102': aquatic_theater
'103': aqueduct
'104': arcade
'105': arch
'106': archaelogical_excavation
'107': archive
'108': basketball
'109': football
'110': hockey
'111': performance
'112': rodeo
'113': soccer
'114': armory
'115': army_base
'116': arrival_gate_indoor
'117': arrival_gate_outdoor
'118': art_school
'119': art_studio
'120': artists_loft
'121': assembly_line
'122': athletic_field_indoor
'123': athletic_field_outdoor
'124': atrium_home
'125': atrium_public
'126': attic
'127': auditorium
'128': auto_factory
'129': auto_mechanics_indoor
'130': auto_mechanics_outdoor
'131': auto_racing_paddock
'132': auto_showroom
'133': backstage
'134': backstairs
'135': badminton_court_indoor
'136': badminton_court_outdoor
'137': baggage_claim
'138': shop
'139': exterior
'140': balcony_interior
'141': ballroom
'142': bamboo_forest
'143': bank_indoor
'144': bank_outdoor
'145': bank_vault
'146': banquet_hall
'147': baptistry_indoor
'148': baptistry_outdoor
'149': bar
'150': barbershop
'151': barn
'152': barndoor
'153': barnyard
'154': barrack
'155': baseball_field
'156': basement
'157': basilica
'158': basketball_court_indoor
'159': basketball_court_outdoor
'160': bathhouse
'161': batters_box
'162': batting_cage_indoor
'163': batting_cage_outdoor
'164': battlement
'165': bayou
'166': bazaar_indoor
'167': bazaar_outdoor
'168': beach_house
'169': beauty_salon
'170': bedchamber
'171': beer_garden
'172': beer_hall
'173': belfry
'174': bell_foundry
'175': berth
'176': berth_deck
'177': betting_shop
'178': bicycle_racks
'179': bindery
'180': biology_laboratory
'181': bistro_indoor
'182': bistro_outdoor
'183': bleachers_indoor
'184': bleachers_outdoor
'185': boardwalk
'186': boat_deck
'187': boathouse
'188': bog
'189': bomb_shelter_indoor
'190': bookbindery
'191': bookstore
'192': bow_window_indoor
'193': bow_window_outdoor
'194': bowling_alley
'195': box_seat
'196': boxing_ring
'197': breakroom
'198': brewery_indoor
'199': brewery_outdoor
'200': brickyard_indoor
'201': brickyard_outdoor
'202': building_complex
'203': building_facade
'204': bullpen
'205': burial_chamber
'206': bus_depot_indoor
'207': bus_depot_outdoor
'208': bus_shelter
'209': bus_station_indoor
'210': bus_station_outdoor
'211': butchers_shop
'212': cabana
'213': cabin_indoor
'214': cabin_outdoor
'215': cafeteria
'216': call_center
'217': campsite
'218': campus
'219': natural
'220': urban
'221': candy_store
'222': canteen
'223': car_dealership
'224': backseat
'225': frontseat
'226': caravansary
'227': cardroom
'228': cargo_container_interior
'229': airplane
'230': boat
'231': freestanding
'232': carport_indoor
'233': carport_outdoor
'234': carrousel
'235': casino_indoor
'236': catacomb
'237': cathedral_indoor
'238': cathedral_outdoor
'239': catwalk
'240': cavern_indoor
'241': cavern_outdoor
'242': cemetery
'243': chalet
'244': chaparral
'245': chapel
'246': checkout_counter
'247': cheese_factory
'248': chemical_plant
'249': chemistry_lab
'250': chicken_coop_indoor
'251': chicken_coop_outdoor
'252': chicken_farm_indoor
'253': chicken_farm_outdoor
'254': childs_room
'255': choir_loft_interior
'256': church_indoor
'257': circus_tent_indoor
'258': circus_tent_outdoor
'259': city
'260': classroom
'261': clean_room
'262': cliff
'263': booth
'264': room
'265': clock_tower_indoor
'266': cloister_indoor
'267': cloister_outdoor
'268': clothing_store
'269': coast_road
'270': cockpit
'271': coffee_shop
'272': computer_room
'273': conference_center
'274': conference_hall
'275': confessional
'276': control_room
'277': control_tower_indoor
'278': control_tower_outdoor
'279': convenience_store_indoor
'280': convenience_store_outdoor
'281': corn_field
'282': cottage
'283': cottage_garden
'284': courthouse
'285': courtroom
'286': courtyard
'287': covered_bridge_interior
'288': crawl_space
'289': creek
'290': crevasse
'291': library
'292': cybercafe
'293': dacha
'294': dairy_indoor
'295': dairy_outdoor
'296': dam
'297': dance_school
'298': darkroom
'299': delicatessen
'300': dentists_office
'301': department_store
'302': departure_lounge
'303': vegetation
'304': desert_road
'305': diner_indoor
'306': diner_outdoor
'307': dinette_home
'308': vehicle
'309': dining_car
'310': dining_hall
'311': dining_room
'312': dirt_track
'313': discotheque
'314': distillery
'315': ditch
'316': dock
'317': dolmen
'318': donjon
'319': doorway_indoor
'320': doorway_outdoor
'321': dorm_room
'322': downtown
'323': drainage_ditch
'324': dress_shop
'325': dressing_room
'326': drill_rig
'327': driveway
'328': driving_range_indoor
'329': driving_range_outdoor
'330': drugstore
'331': dry_dock
'332': dugout
'333': earth_fissure
'334': editing_room
'335': electrical_substation
'336': elevated_catwalk
'337': door
'338': freight_elevator
'339': elevator_lobby
'340': elevator_shaft
'341': embankment
'342': embassy
'343': engine_room
'344': entrance_hall
'345': escalator_outdoor
'346': escarpment
'347': estuary
'348': excavation
'349': exhibition_hall
'350': fabric_store
'351': factory_indoor
'352': factory_outdoor
'353': fairway
'354': farm
'355': fastfood_restaurant
'356': fence
'357': cargo_deck
'358': ferryboat_indoor
'359': passenger_deck
'360': cultivated
'361': wild
'362': field_road
'363': fire_escape
'364': fire_station
'365': firing_range_indoor
'366': firing_range_outdoor
'367': fish_farm
'368': fishmarket
'369': fishpond
'370': fitting_room_interior
'371': fjord
'372': flea_market_indoor
'373': flea_market_outdoor
'374': floating_dry_dock
'375': flood
'376': florist_shop_indoor
'377': florist_shop_outdoor
'378': fly_bridge
'379': food_court
'380': football_field
'381': broadleaf
'382': needleleaf
'383': forest_fire
'384': forest_path
'385': formal_garden
'386': fort
'387': fortress
'388': foundry_indoor
'389': foundry_outdoor
'390': fountain
'391': freeway
'392': funeral_chapel
'393': funeral_home
'394': furnace_room
'395': galley
'396': game_room
'397': garage_indoor
'398': garage_outdoor
'399': garbage_dump
'400': gasworks
'401': gate
'402': gatehouse
'403': gazebo_interior
'404': general_store_indoor
'405': general_store_outdoor
'406': geodesic_dome_indoor
'407': geodesic_dome_outdoor
'408': ghost_town
'409': gift_shop
'410': glacier
'411': glade
'412': gorge
'413': granary
'414': great_hall
'415': greengrocery
'416': greenhouse_indoor
'417': greenhouse_outdoor
'418': grotto
'419': guardhouse
'420': gulch
'421': gun_deck_indoor
'422': gun_deck_outdoor
'423': gun_store
'424': hacienda
'425': hallway
'426': handball_court
'427': hangar_indoor
'428': hangar_outdoor
'429': hardware_store
'430': hat_shop
'431': hatchery
'432': hayloft
'433': hearth
'434': hedge_maze
'435': hedgerow
'436': heliport
'437': herb_garden
'438': highway
'439': hill
'440': home_office
'441': home_theater
'442': hospital
'443': hospital_room
'444': hot_spring
'445': hot_tub_indoor
'446': hot_tub_outdoor
'447': hotel_outdoor
'448': hotel_breakfast_area
'449': hotel_room
'450': hunting_lodge_indoor
'451': hut
'452': ice_cream_parlor
'453': ice_floe
'454': ice_skating_rink_indoor
'455': ice_skating_rink_outdoor
'456': iceberg
'457': igloo
'458': imaret
'459': incinerator_indoor
'460': incinerator_outdoor
'461': industrial_area
'462': industrial_park
'463': inn_indoor
'464': inn_outdoor
'465': irrigation_ditch
'466': islet
'467': jacuzzi_indoor
'468': jacuzzi_outdoor
'469': jail_indoor
'470': jail_outdoor
'471': jail_cell
'472': japanese_garden
'473': jetty
'474': jewelry_shop
'475': junk_pile
'476': junkyard
'477': jury_box
'478': kasbah
'479': kennel_indoor
'480': kennel_outdoor
'481': kindergarden_classroom
'482': kiosk_outdoor
'483': kitchenette
'484': lab_classroom
'485': labyrinth_indoor
'486': labyrinth_outdoor
'487': lagoon
'488': artificial
'489': landing
'490': landing_deck
'491': laundromat
'492': lava_flow
'493': lavatory
'494': lawn
'495': lean-to
'496': lecture_room
'497': legislative_chamber
'498': levee
'499': library_outdoor
'500': lido_deck_indoor
'501': lift_bridge
'502': lighthouse
'503': limousine_interior
'504': liquor_store_indoor
'505': liquor_store_outdoor
'506': loading_dock
'507': lobby
'508': lock_chamber
'509': loft
'510': lookout_station_indoor
'511': lookout_station_outdoor
'512': lumberyard_indoor
'513': lumberyard_outdoor
'514': machine_shop
'515': manhole
'516': mansion
'517': manufactured_home
'518': market_indoor
'519': marsh
'520': martial_arts_gym
'521': mastaba
'522': maternity_ward
'523': mausoleum
'524': medina
'525': menhir
'526': mesa
'527': mess_hall
'528': mezzanine
'529': military_hospital
'530': military_hut
'531': military_tent
'532': mine
'533': mineshaft
'534': mini_golf_course_indoor
'535': mini_golf_course_outdoor
'536': mission
'537': dry
'538': water
'539': mobile_home
'540': monastery_indoor
'541': monastery_outdoor
'542': moon_bounce
'543': moor
'544': morgue
'545': mosque_indoor
'546': mosque_outdoor
'547': motel
'548': mountain
'549': mountain_path
'550': mountain_road
'551': movie_theater_indoor
'552': movie_theater_outdoor
'553': mudflat
'554': museum_indoor
'555': museum_outdoor
'556': music_store
'557': music_studio
'558': misc
'559': natural_history_museum
'560': naval_base
'561': newsroom
'562': newsstand_indoor
'563': newsstand_outdoor
'564': nightclub
'565': nuclear_power_plant_indoor
'566': nuclear_power_plant_outdoor
'567': nunnery
'568': nursery
'569': nursing_home
'570': oasis
'571': oast_house
'572': observatory_indoor
'573': observatory_outdoor
'574': observatory_post
'575': ocean
'576': office_building
'577': office_cubicles
'578': oil_refinery_indoor
'579': oil_refinery_outdoor
'580': oilrig
'581': operating_room
'582': optician
'583': organ_loft_interior
'584': orlop_deck
'585': ossuary
'586': outcropping
'587': outhouse_indoor
'588': outhouse_outdoor
'589': overpass
'590': oyster_bar
'591': oyster_farm
'592': acropolis
'593': aircraft_carrier_object
'594': amphitheater_indoor
'595': archipelago
'596': questionable
'597': assembly_hall
'598': assembly_plant
'599': awning_deck
'600': back_porch
'601': backdrop
'602': backroom
'603': backstage_outdoor
'604': backstairs_indoor
'605': backwoods
'606': ballet
'607': balustrade
'608': barbeque
'609': basin_outdoor
'610': bath_indoor
'611': bath_outdoor
'612': bathhouse_outdoor
'613': battlefield
'614': bay
'615': booth_outdoor
'616': bottomland
'617': breakfast_table
'618': bric-a-brac
'619': brooklet
'620': bubble_chamber
'621': buffet
'622': bulkhead
'623': bunk_bed
'624': bypass
'625': byroad
'626': cabin_cruiser
'627': cargo_helicopter
'628': cellar
'629': chair_lift
'630': cocktail_lounge
'631': corner
'632': country_house
'633': country_road
'634': customhouse
'635': dance_floor
'636': deck-house_boat_deck_house
'637': deck-house_deck_house
'638': dining_area
'639': diving_board
'640': embrasure
'641': entranceway_indoor
'642': entranceway_outdoor
'643': entryway_outdoor
'644': estaminet
'645': farm_building
'646': farmhouse
'647': feed_bunk
'648': field_house
'649': field_tent_indoor
'650': field_tent_outdoor
'651': fire_trench
'652': fireplace
'653': flashflood
'654': flatlet
'655': floating_dock
'656': flood_plain
'657': flowerbed
'658': flume_indoor
'659': flying_buttress
'660': foothill
'661': forecourt
'662': foreshore
'663': front_porch
'664': garden
'665': gas_well
'666': glen
'667': grape_arbor
'668': grove
'669': guardroom
'670': guesthouse
'671': gymnasium_outdoor
'672': head_shop
'673': hen_yard
'674': hillock
'675': housing_estate
'676': housing_project
'677': howdah
'678': inlet
'679': insane_asylum
'680': outside
'681': juke_joint
'682': jungle
'683': kraal
'684': laboratorywet
'685': landing_strip
'686': layby
'687': lean-to_tent
'688': loge
'689': loggia_outdoor
'690': lower_deck
'691': luggage_van
'692': mansard
'693': meadow
'694': meat_house
'695': megalith
'696': mens_store_outdoor
'697': mental_institution_indoor
'698': mental_institution_outdoor
'699': military_headquarters
'700': millpond
'701': millrace
'702': natural_spring
'703': nursing_home_outdoor
'704': observation_station
'705': open-hearth_furnace
'706': operating_table
'707': outbuilding
'708': palestra
'709': parkway
'710': patio_indoor
'711': pavement
'712': pawnshop_outdoor
'713': pinetum
'714': piste_road
'715': pizzeria_outdoor
'716': powder_room
'717': pumping_station
'718': reception_room
'719': rest_stop
'720': retaining_wall
'721': rift_valley
'722': road
'723': rock_garden
'724': rotisserie
'725': safari_park
'726': salon
'727': saloon
'728': sanatorium
'729': science_laboratory
'730': scrubland
'731': scullery
'732': seaside
'733': semidesert
'734': shelter
'735': shelter_deck
'736': shelter_tent
'737': shore
'738': shrubbery
'739': sidewalk
'740': snack_bar
'741': snowbank
'742': stage_set
'743': stall
'744': stateroom
'745': store
'746': streetcar_track
'747': student_center
'748': study_hall
'749': sugar_refinery
'750': sunroom
'751': supply_chamber
'752': t-bar_lift
'753': tannery
'754': teahouse
'755': threshing_floor
'756': ticket_window_indoor
'757': tidal_basin
'758': tidal_river
'759': tiltyard
'760': tollgate
'761': tomb
'762': tract_housing
'763': trellis
'764': truck_stop
'765': upper_balcony
'766': vestibule
'767': vinery
'768': walkway
'769': war_room
'770': washroom
'771': water_fountain
'772': water_gate
'773': waterscape
'774': waterway
'775': wetland
'776': widows_walk_indoor
'777': windstorm
'778': packaging_plant
'779': pagoda
'780': paper_mill
'781': park
'782': parking_garage_indoor
'783': parking_garage_outdoor
'784': parking_lot
'785': parlor
'786': particle_accelerator
'787': party_tent_indoor
'788': party_tent_outdoor
'789': pasture
'790': pavilion
'791': pawnshop
'792': pedestrian_overpass_indoor
'793': penalty_box
'794': pet_shop
'795': pharmacy
'796': physics_laboratory
'797': piano_store
'798': picnic_area
'799': pier
'800': pig_farm
'801': pilothouse_indoor
'802': pilothouse_outdoor
'803': pitchers_mound
'804': pizzeria
'805': planetarium_indoor
'806': planetarium_outdoor
'807': plantation_house
'808': playground
'809': playroom
'810': plaza
'811': podium_indoor
'812': podium_outdoor
'813': police_station
'814': pond
'815': pontoon_bridge
'816': poop_deck
'817': porch
'818': portico
'819': portrait_studio
'820': postern
'821': power_plant_outdoor
'822': print_shop
'823': priory
'824': promenade
'825': promenade_deck
'826': pub_indoor
'827': pub_outdoor
'828': pulpit
'829': putting_green
'830': quadrangle
'831': quicksand
'832': quonset_hut_indoor
'833': racecourse
'834': raceway
'835': raft
'836': railroad_track
'837': railway_yard
'838': rainforest
'839': ramp
'840': ranch
'841': ranch_house
'842': reading_room
'843': reception
'844': recreation_room
'845': rectory
'846': recycling_plant_indoor
'847': refectory
'848': repair_shop
'849': residential_neighborhood
'850': resort
'851': rest_area
'852': restaurant
'853': restaurant_kitchen
'854': restaurant_patio
'855': restroom_indoor
'856': restroom_outdoor
'857': revolving_door
'858': riding_arena
'859': river
'860': road_cut
'861': rock_arch
'862': roller_skating_rink_indoor
'863': roller_skating_rink_outdoor
'864': rolling_mill
'865': roof
'866': roof_garden
'867': root_cellar
'868': rope_bridge
'869': roundabout
'870': roundhouse
'871': rubble
'872': ruin
'873': runway
'874': sacristy
'875': salt_plain
'876': sand_trap
'877': sandbar
'878': sauna
'879': savanna
'880': sawmill
'881': schoolhouse
'882': schoolyard
'883': science_museum
'884': scriptorium
'885': sea_cliff
'886': seawall
'887': security_check_point
'888': server_room
'889': sewer
'890': sewing_room
'891': shed
'892': shipping_room
'893': shipyard_outdoor
'894': shoe_shop
'895': shopping_mall_indoor
'896': shopping_mall_outdoor
'897': shower
'898': shower_room
'899': shrine
'900': signal_box
'901': sinkhole
'902': ski_jump
'903': ski_lodge
'904': ski_resort
'905': ski_slope
'906': sky
'907': skywalk_indoor
'908': skywalk_outdoor
'909': slum
'910': snowfield
'911': massage_room
'912': mineral_bath
'913': spillway
'914': sporting_goods_store
'915': squash_court
'916': stable
'917': baseball
'918': stadium_outdoor
'919': stage_indoor
'920': stage_outdoor
'921': staircase
'922': starting_gate
'923': steam_plant_outdoor
'924': steel_mill_indoor
'925': storage_room
'926': storm_cellar
'927': street
'928': strip_mall
'929': strip_mine
'930': student_residence
'931': submarine_interior
'932': sun_deck
'933': sushi_bar
'934': swamp
'935': swimming_hole
'936': swimming_pool_indoor
'937': synagogue_indoor
'938': synagogue_outdoor
'939': taxistand
'940': taxiway
'941': tea_garden
'942': tearoom
'943': teashop
'944': television_room
'945': east_asia
'946': mesoamerican
'947': south_asia
'948': western
'949': tennis_court_indoor
'950': tennis_court_outdoor
'951': tent_outdoor
'952': terrace_farm
'953': indoor_round
'954': indoor_seats
'955': theater_outdoor
'956': thriftshop
'957': throne_room
'958': ticket_booth
'959': tobacco_shop_indoor
'960': toll_plaza
'961': tollbooth
'962': topiary_garden
'963': tower
'964': town_house
'965': toyshop
'966': track_outdoor
'967': trading_floor
'968': trailer_park
'969': train_interior
'970': train_station_outdoor
'971': station
'972': tree_farm
'973': tree_house
'974': trench
'975': trestle_bridge
'976': tundra
'977': rail_indoor
'978': rail_outdoor
'979': road_indoor
'980': road_outdoor
'981': turkish_bath
'982': ocean_deep
'983': ocean_shallow
'984': utility_room
'985': valley
'986': van_interior
'987': vegetable_garden
'988': velodrome_indoor
'989': velodrome_outdoor
'990': ventilation_shaft
'991': veranda
'992': vestry
'993': veterinarians_office
'994': videostore
'995': village
'996': vineyard
'997': volcano
'998': volleyball_court_indoor
'999': volleyball_court_outdoor
'1000': voting_booth
'1001': waiting_room
'1002': walk_in_freezer
'1003': warehouse_indoor
'1004': warehouse_outdoor
'1005': washhouse_indoor
'1006': washhouse_outdoor
'1007': watchtower
'1008': water_mill
'1009': water_park
'1010': water_tower
'1011': water_treatment_plant_indoor
'1012': water_treatment_plant_outdoor
'1013': block
'1014': cascade
'1015': cataract
'1016': fan
'1017': plunge
'1018': watering_hole
'1019': weighbridge
'1020': wet_bar
'1021': wharf
'1022': wheat_field
'1023': whispering_gallery
'1024': widows_walk_interior
'1025': windmill
'1026': window_seat
'1027': barrel_storage
'1028': winery
'1029': witness_stand
'1030': woodland
'1031': workroom
'1032': workshop
'1033': wrestling_ring_indoor
'1034': wrestling_ring_outdoor
'1035': yard
'1036': youth_hostel
'1037': zen_garden
'1038': ziggurat
'1039': zoo
'1040': forklift
'1041': hollow
'1042': hutment
'1043': pueblo
'1044': vat
'1045': perfume_shop
'1046': steel_mill_outdoor
'1047': orchestra_pit
'1048': bridle_path
'1049': lyceum
'1050': one-way_street
'1051': parade_ground
'1052': pump_room
'1053': recycling_plant_outdoor
'1054': chuck_wagon
splits:
- name: train
num_bytes: 8468086
num_examples: 20210
- name: test
num_bytes: 744607
num_examples: 3352
- name: validation
num_bytes: 838032
num_examples: 2000
download_size: 1179202534
dataset_size: 10050725
- config_name: instance_segmentation
features:
- name: image
dtype: image
- name: annotation
dtype: image
splits:
- name: train
num_bytes: 862611544
num_examples: 20210
- name: test
num_bytes: 212493928
num_examples: 3352
- name: validation
num_bytes: 87502294
num_examples: 2000
download_size: 1197393920
dataset_size: 1162607766
---
# Dataset Card for MIT Scene Parsing Benchmark
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MIT Scene Parsing Benchmark homepage](http://sceneparsing.csail.mit.edu/)
- **Repository:** [Scene Parsing repository (Caffe/Torch7)](https://github.com/CSAILVision/sceneparsing),[Scene Parsing repository (PyTorch)](https://github.com/CSAILVision/semantic-segmentation-pytorch) and [Instance Segmentation repository](https://github.com/CSAILVision/placeschallenge/tree/master/instancesegmentation)
- **Paper:** [Scene Parsing through ADE20K Dataset](http://people.csail.mit.edu/bzhou/publication/scene-parse-camera-ready.pdf) and [Semantic Understanding of Scenes through ADE20K Dataset](https://arxiv.org/abs/1608.05442)
- **Leaderboard:** [MIT Scene Parsing Benchmark leaderboard](http://sceneparsing.csail.mit.edu/#:~:text=twice%20per%20week.-,leaderboard,-Organizers)
- **Point of Contact:** [Bolei Zhou](mailto:bzhou@ie.cuhk.edu.hk)
### Dataset Summary
Scene parsing is the task of segmenting and parsing an image into different image regions associated with semantic categories, such as sky, road, person, and bed. The MIT Scene Parsing Benchmark (SceneParse150) provides a standard training and evaluation platform for scene parsing algorithms. The data for this benchmark comes from the ADE20K dataset, which contains more than 20K scene-centric images exhaustively annotated with objects and object parts. Specifically, the benchmark is divided into 20K images for training, 2K images for validation, and another batch of held-out images for testing. In total, 150 semantic categories are included for evaluation, covering stuff classes such as sky, road, and grass, as well as discrete objects such as person, car, and bed. Note that the distribution of objects occurring in the images is non-uniform, mimicking a more natural object occurrence in daily scenes.
The goal of this benchmark is to segment and parse an image into different image regions associated with semantic categories, such as sky, road, person, and bed. This benchmark is similar to the semantic segmentation tasks in the COCO and Pascal datasets, but its data is more scene-centric and covers a more diverse range of object categories.
### Supported Tasks and Leaderboards
- `scene-parsing`: The goal of this task is to segment the whole image densely into semantic classes (image regions), where each pixel is assigned a class label such as the region of *tree* and the region of *building*.
[The leaderboard](http://sceneparsing.csail.mit.edu/#:~:text=twice%20per%20week.-,leaderboard,-Organizers) for this task ranks models by the mean of pixel-wise accuracy and class-wise IoU as the final score. Pixel-wise accuracy indicates the ratio of pixels that are correctly predicted, while class-wise IoU indicates the Intersection over Union of pixels averaged over all 150 semantic categories. Refer to the [Development Kit](https://github.com/CSAILVision/sceneparsing) for details.
- `instance-segmentation`: The goal of this task is to detect the object instances inside an image and further generate precise segmentation masks of the objects. It differs from scene parsing in that scene parsing has no notion of instances for the segmented regions, whereas in instance segmentation, if there are three persons in the scene, the network is required to segment each of the person regions separately. This task doesn't have an active leaderboard. The performance of instance segmentation algorithms is evaluated by Average Precision (AP, or mAP), following the COCO evaluation metrics. For each image, at most 255 top-scoring instance masks are taken across all categories. Each instance mask prediction is only considered if its IoU with the ground truth is above a certain threshold. There are 10 IoU thresholds of 0.50:0.05:0.95 for evaluation. The final AP is averaged across the 10 IoU thresholds and 100 categories. You can refer to the COCO evaluation page for more explanation: http://mscoco.org/dataset/#detections-eval
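The scene-parsing score described above (the mean of pixel-wise accuracy and class-wise IoU) can be sketched with a minimal NumPy implementation. This is an illustration on tiny fabricated masks, not the official Development Kit: the official evaluation averages IoU over all 150 categories, whereas this sketch skips classes absent from both masks, and label 0 ("other objects") is excluded as in the official protocol.

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Ratio of correctly predicted pixels, ignoring label 0 ("other objects")."""
    valid = target != 0
    return float((pred[valid] == target[valid]).sum() / valid.sum())

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int = 150) -> float:
    """Class-wise Intersection over Union, averaged over the classes present."""
    ious = []
    for cls in range(1, num_classes + 1):  # labels 1..150; 0 is ignored
        pred_mask = pred == cls
        target_mask = target == cls
        union = np.logical_or(pred_mask, target_mask).sum()
        if union == 0:
            continue  # class absent from both masks; skip rather than count as 0
        inter = np.logical_and(pred_mask, target_mask).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Tiny fabricated masks for illustration (real masks are full-size images).
target = np.array([[1, 1], [2, 0]])
pred = np.array([[1, 2], [2, 0]])
score = (pixel_accuracy(pred, target) + mean_iou(pred, target)) / 2
```

For official numbers, always use the evaluation code in the Development Kit, since averaging conventions matter at the decimal level.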
### Languages
English.
## Dataset Structure
### Data Instances
A data point comprises an image and its annotation mask, which is `None` in the testing set. The `scene_parsing` configuration has an additional `scene_category` field.
#### `scene_parsing`
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=683x512 at 0x1FF32A3EDA0>,
'annotation': <PIL.PngImagePlugin.PngImageFile image mode=L size=683x512 at 0x1FF32E5B978>,
'scene_category': 0
}
```
#### `instance_segmentation`
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=256x256 at 0x20B51B5C400>,
'annotation': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=256x256 at 0x20B57051B38>
}
```
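Since both fields decode to `PIL.Image.Image` objects, inspecting a record's mask is a matter of converting it to a NumPy array. A minimal sketch, with a tiny fabricated mask standing in for a real record's `annotation` field (real masks come from the Hub and use labels 0–150):

```python
import numpy as np
from PIL import Image

# Fabricate a 2x2 grayscale annotation mask in place of a real record.
annotation = Image.fromarray(np.array([[0, 1], [1, 2]], dtype=np.uint8), mode="L")

arr = np.asarray(annotation)
present_labels = np.unique(arr)  # which semantic labels appear in this mask
coverage = {int(c): float((arr == c).mean()) for c in present_labels}  # pixel ratio per label
```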
### Data Fields
#### `scene_parsing`
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `annotation`: A `PIL.Image.Image` object containing the annotation mask.
- `scene_category`: A scene category for the image (e.g. `airport_terminal`, `canyon`, `mobile_home`).
> **Note**: annotation masks contain labels ranging from 0 to 150, where 0 refers to "other objects". Those pixels are not considered in the official evaluation. Refer to [this file](https://github.com/CSAILVision/sceneparsing/blob/master/objectInfo150.csv) for the information about the labels of the 150 semantic categories, including indices, pixel ratios and names.
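Because label 0 is not considered in the official evaluation, a common preprocessing step when training segmentation models is to shift the labels to 0–149 and map the unlabeled pixels to an ignore index. This convention (and the value 255) is an assumption common in segmentation pipelines, not something prescribed by the card:

```python
import numpy as np

IGNORE_INDEX = 255  # conventional ignore value in many segmentation losses (an assumption)

def remap_labels(mask: np.ndarray) -> np.ndarray:
    """Shift labels 1..150 to 0..149 and map the unlabeled class 0 to IGNORE_INDEX."""
    out = mask.astype(np.int64) - 1  # 1..150 -> 0..149, 0 -> -1
    out[out == -1] = IGNORE_INDEX
    return out

remapped = remap_labels(np.array([[0, 1], [150, 75]]))
```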
#### `instance_segmentation`
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `annotation`: A `PIL.Image.Image` object containing the annotation mask.
> **Note**: in the instance annotation masks, the R(ed) channel encodes category ID, and the G(reen) channel encodes instance ID. Each object instance has a unique instance ID regardless of its category ID. In the dataset, all images have <256 object instances. Refer to [this file (train split)](https://github.com/CSAILVision/placeschallenge/blob/master/instancesegmentation/instanceInfo100_train.txt) and to [this file (validation split)](https://github.com/CSAILVision/placeschallenge/blob/master/instancesegmentation/instanceInfo100_val.txt) for the information about the labels of the 100 semantic categories. To find the mapping between the semantic categories for `instance_segmentation` and `scene_parsing`, refer to [this file](https://github.com/CSAILVision/placeschallenge/blob/master/instancesegmentation/categoryMapping.txt).
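The R/G channel encoding described above can be unpacked with NumPy. A minimal sketch on a fabricated 2×2 mask (real annotations are full-size images from the Hub; the category and instance IDs below are made up for illustration):

```python
import numpy as np
from PIL import Image

# Fabricate an RGB instance mask: R channel = category ID, G channel = instance ID.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = (12, 1, 0)  # category 12, instance 1
rgb[0, 1] = (12, 2, 0)  # category 12, instance 2 (same class, different instance)
rgb[1, 0] = (7, 3, 0)   # category 7, instance 3 (instance IDs are unique across classes)
annotation = Image.fromarray(rgb, mode="RGB")

arr = np.asarray(annotation)
category_ids = arr[..., 0]  # R channel
instance_ids = arr[..., 1]  # G channel

# One boolean mask per instance, skipping the background (instance ID 0).
instances = {}
for inst in np.unique(instance_ids):
    if inst == 0:
        continue
    mask = instance_ids == inst
    cat = int(category_ids[mask][0])  # each instance belongs to a single category
    instances[int(inst)] = (cat, mask)
```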
### Data Splits
The data is split into a training, test, and validation set. The training set contains 20,210 images, the test set contains 3,352 images, and the validation set contains 2,000 images.
## Dataset Creation
### Curation Rationale
The rationale from the paper for the ADE20K dataset from which this benchmark originates:
> Semantic understanding of visual scenes is one of the holy grails of computer vision. Despite efforts of the community in data collection, there are still few image datasets covering a wide range of scenes and object categories with pixel-wise annotations for scene understanding. In this work, we present a densely annotated dataset ADE20K, which spans diverse annotations of scenes, objects, parts of objects, and
> in some cases even parts of parts.
> The motivation of this work is to collect a dataset that has densely annotated images (every pixel has a semantic label) with a large and an unrestricted open vocabulary. The
> images in our dataset are manually segmented in great detail, covering a diverse set of scenes, object and object part categories. The challenge for collecting such annotations is finding reliable annotators, as well as the fact that labeling is difficult if the class list is not defined in advance. On the other hand, open vocabulary naming also suffers from naming inconsistencies across different annotators. In contrast,
our dataset was annotated by a single expert annotator, providing extremely detailed and exhaustive image annotations. On average, our annotator labeled 29 annotation segments per image, compared to the 16 segments per image labeled by external annotators (like workers from Amazon Mechanical Turk). Furthermore, the data consistency and quality are much higher than that of external annotators.
### Source Data
#### Initial Data Collection and Normalization
Images come from the LabelMe, SUN, and Places datasets and were selected to cover the 900 scene categories defined in the SUN database.
This benchmark was built by selecting the top 150 objects ranked by their total pixel ratios in the ADE20K dataset. As the original images in the ADE20K dataset have various sizes, for simplicity the large-sized images were rescaled so that their minimum height or width is 512 pixels. Among the 150 objects, there are 35 stuff classes (e.g., wall, sky, road) and 115 discrete objects (e.g., car, person, table). The annotated pixels of the 150 objects occupy 92.75% of all the pixels in the dataset, where the stuff classes occupy 60.92% and discrete objects occupy 31.83%.
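The rescaling step described above (shrinking only images whose shorter side exceeds 512 pixels) can be sketched with Pillow. The interpolation filter is an assumption, as the card does not specify one:

```python
from PIL import Image

def rescale_min_side(image: Image.Image, target: int = 512) -> Image.Image:
    """Rescale so the shorter side equals `target`, preserving the aspect ratio."""
    w, h = image.size
    short = min(w, h)
    if short <= target:
        return image  # small images are left untouched
    scale = target / short
    return image.resize((round(w * scale), round(h * scale)), Image.BILINEAR)

resized = rescale_min_side(Image.new("RGB", (1024, 2048)))
```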
#### Who are the source language producers?
The same as in the LabelMe, SUN, and Places datasets.
### Annotations
#### Annotation process
Annotation process for the ADE20K dataset:
> **Image Annotation.** For our dataset, we are interested in having a diverse set of scenes with dense annotations of all the objects present. Images come from the LabelMe, SUN datasets, and Places and were selected to cover the 900 scene categories defined in the SUN database. Images were annotated by a single expert worker using the LabelMe interface. Fig. 2 shows a snapshot of the annotation interface and one fully segmented image. The worker provided three types of annotations: object segments with names, object parts, and attributes. All object instances are segmented independently so that the dataset could be used to train and evaluate detection or segmentation algorithms. Datasets such as COCO, Pascal or Cityscape start by defining a set of object categories of interest. However, when labeling all the objects in a scene, working with a predefined list of objects is not possible as new categories
> appear frequently (see fig. 5.d). Here, the annotator created a dictionary of visual concepts where new classes were added constantly to ensure consistency in object naming. Object parts are associated with object instances. Note that parts can have parts too, and we label these associations as well. For example, the ‘rim’ is a part of a ‘wheel’, which in turn is part of a ‘car’. A ‘knob’ is a part of a ‘door’
> that can be part of a ‘cabinet’. The total part hierarchy has a depth of 3. The object and part hierarchy is in the supplementary materials.
> **Annotation Consistency.** Defining a labeling protocol is relatively easy when the labeling task is restricted to a fixed list of object classes, however it becomes challenging when the class list is open-ended. As the goal is to label all the objects within each image, the list of classes grows unbounded. Many object classes appear only a few times across the entire collection of images. However, those rare object classes cannot be ignored as they might be important elements for the interpretation of the scene. Labeling in these conditions becomes difficult because we need to keep a growing list of all the object classes in order to have a consistent naming across the entire dataset. Despite the annotator’s best effort, the process is not free of noise. To analyze the annotation consistency we took a subset of 61 randomly chosen images from the validation set, then asked our annotator to annotate them again (there is a time difference of six months). One expects that there are some differences between the two annotations. A few examples are shown in Fig 3. On average, 82.4% of the pixels got the same label. The remaining 17.6% of pixels had some errors, which we grouped into three error types as follows:
>
> • Segmentation quality: Variations in the quality of segmentation and outlining of the object boundary. One typical source of error arises when segmenting complex objects such as buildings and trees, which can be segmented with different degrees of precision. 5.7% of the pixels had this type of error.
>
> • Object naming: Differences in object naming (due to ambiguity or similarity between concepts, for instance calling a big car a ‘car’ in one segmentation and a ‘truck’ in another one, or a ‘palm tree’ a ‘tree’). 6.0% of the pixels had naming issues. These errors can be reduced by defining a very precise terminology, but this becomes much harder with a large growing vocabulary.
>
> • Segmentation quantity: Missing objects in one of the two segmentations. There is a very large number of objects in each image and some images might be annotated more thoroughly than others. For example, in the third column of Fig 3 the annotator missed some small objects in different annotations. 5.9% of the pixels are due to missing labels. A similar issue existed in segmentation datasets such as the Berkeley Image segmentation dataset.
>
> The median error values for the three error types are: 4.8%, 0.3% and 2.6% showing that the mean value is dominated by a few images, and that the most common type of error is segmentation quality.
> To further compare the annotation done by our single expert annotator and the AMT-like annotators, 20 images
> from the validation set are annotated by two invited external annotators, both with prior experience in image labeling. The first external annotator had 58.5% of inconsistent pixels compared to the segmentation provided by our annotator, and the second external annotator had 75% of the inconsistent pixels. Many of these inconsistencies are due to the poor quality of the segmentations provided by external annotators (as it has been observed with AMT which requires multiple verification steps for quality control). For the
> best external annotator (the first one), 7.9% of pixels have inconsistent segmentations (just slightly worse than our annotator), 14.9% have inconsistent object naming and 35.8% of the pixels correspond to missing objects, which is due to the much smaller number of objects annotated by the external annotator in comparison with the ones annotated by our expert annotator. The external annotators labeled on average 16 segments per image while our annotator provided 29 segments per image.
#### Who are the annotators?
A single expert annotator annotated the full dataset; two invited external annotators with prior experience in image labeling annotated a 20-image subset to assess annotation consistency.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Refer to the `Annotation Consistency` subsection of `Annotation Process`.
## Additional Information
### Dataset Curators
Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso and Antonio Torralba.
### Licensing Information
The MIT Scene Parsing Benchmark dataset is licensed under a [BSD 3-Clause License](https://github.com/CSAILVision/sceneparsing/blob/master/LICENSE).
### Citation Information
```bibtex
@inproceedings{zhou2017scene,
title={Scene Parsing through ADE20K Dataset},
author={Zhou, Bolei and Zhao, Hang and Puig, Xavier and Fidler, Sanja and Barriuso, Adela and Torralba, Antonio},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
year={2017}
}
@article{zhou2016semantic,
title={Semantic understanding of scenes through the ade20k dataset},
author={Zhou, Bolei and Zhao, Hang and Puig, Xavier and Fidler, Sanja and Barriuso, Adela and Torralba, Antonio},
journal={arXiv preprint arXiv:1608.05442},
year={2016}
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
cyanic-selkie/wikianc | 2023-09-05T14:22:32.000Z | [
"task_categories:token-classification",
"annotations_creators:machine-generated",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"language:en",
"language:ceb",
"language:de",
"language:sv",
"language:fr",
"language:nl",
"language:ru",
"language:es",
"language:it",
"language:arz",
"language:pl",
"language:ja",
"language:zh",
"language:vi",
"language:uk",
"language:war",
"language:ar",
"language:pt",
"language:fa",
"language:ca",
"language:sr",
"language:id",
"language:ko",
"language:no",
"language:ce",
"language:fi",
"language:cs",
"language:tr",
"language:hu",
"language:tt",
"language:sh",
"language:ro",
"language:eu",
"language:ms",
"language:eo",
"language:he",
"language:hy",
"language:da",
"language:bg",
"language:cy",
"language:sk",
"language:azb",
"language:uz",
"language:et",
"language:be",
"language:kk",
"language:min",
"language:el",
"language:hr",
"language:lt",
"language:gl",
"language:az",
"language:ur",
"language:sl",
"language:lld",
"language:ka",
"language:nn",
"language:hi",
"language:th",
"language:ta",
"language:bn",
"language:la",
"language:mk",
"language:ast",
"language:lv",
"language:af",
"language:tg",
"language:my",
"language:mg",
"language:mr",
"language:sq",
"language:bs",
"language:oc",
"language:te",
"language:ml",
"language:nds",
"language:br",
"language:ky",
"language:sw",
"language:jv",
"language:lmo",
"language:new",
"language:pnb",
"language:vec",
"language:ht",
"language:pms",
"language:ba",
"language:lb",
"language:su",
"language:ku",
"language:ga",
"language:szl",
"language:is",
"language:fy",
"language:cv",
"language:ckb",
"language:pa",
"language:tl",
"language:an",
"language:wuu",
"language:diq",
"language:io",
"language:sco",
"language:vo",
"language:yo",
"language:ne",
"language:ia",
"language:kn",
"language:gu",
"language:als",
"language:ha",
"language:avk",
"language:bar",
"language:crh",
"language:scn",
"language:bpy",
"language:qu",
"language:mn",
"language:nv",
"language:xmf",
"language:ban",
"language:si",
"language:tum",
"language:ps",
"language:ig",
"language:frr",
"language:os",
"language:mzn",
"language:or",
"language:sah",
"language:cdo",
"language:gd",
"language:bug",
"language:yi",
"language:sd",
"language:ilo",
"language:am",
"language:nap",
"language:li",
"language:bcl",
"language:fo",
"language:gor",
"language:hsb",
"language:mai",
"language:shn",
"language:eml",
"language:ace",
"language:sa",
"language:as",
"language:wa",
"language:ie",
"language:hyw",
"language:lij",
"language:mhr",
"language:zu",
"language:sn",
"language:hif",
"language:mrj",
"language:bjn",
"language:km",
"language:mni",
"language:hak",
"language:pam",
"language:sat",
"language:rue",
"language:nso",
"language:bh",
"language:so",
"language:mi",
"language:se",
"language:myv",
"language:vls",
"language:dag",
"language:sc",
"language:co",
"language:ary",
"language:kw",
"language:bo",
"language:vep",
"language:glk",
"language:tk",
"language:kab",
"language:gan",
"language:rw",
"language:ab",
"language:gv",
"language:ug",
"language:nah",
"language:zea",
"language:skr",
"language:frp",
"language:udm",
"language:pcd",
"language:mt",
"language:kv",
"language:csb",
"language:gn",
"language:smn",
"language:ay",
"language:nrm",
"language:ks",
"language:lez",
"language:lfn",
"language:olo",
"language:mwl",
"language:lo",
"language:stq",
"language:ang",
"language:mdf",
"language:fur",
"language:rm",
"language:lad",
"language:kaa",
"language:gom",
"language:ext",
"language:koi",
"language:tyv",
"language:pap",
"language:av",
"language:dsb",
"language:ln",
"language:dty",
"language:tw",
"language:dv",
"language:ksh",
"language:za",
"language:gag",
"language:bxr",
"language:pfl",
"language:lg",
"language:szy",
"language:pag",
"language:blk",
"language:pi",
"language:tay",
"language:haw",
"language:awa",
"language:inh",
"language:krc",
"language:xal",
"language:pdc",
"language:to",
"language:atj",
"language:tcy",
"language:arc",
"language:mnw",
"language:shi",
"language:jam",
"language:kbp",
"language:wo",
"language:anp",
"language:kbd",
"language:nia",
"language:om",
"language:nov",
"language:ki",
"language:nqo",
"language:bi",
"language:xh",
"language:tpi",
"language:ff",
"language:tet",
"language:jbo",
"language:fj",
"language:kg",
"language:lbe",
"language:ty",
"language:cu",
"language:guw",
"language:trv",
"language:ami",
"language:srn",
"language:sm",
"language:mad",
"language:alt",
"language:ltg",
"language:gcr",
"language:chr",
"language:tn",
"language:ny",
"language:st",
"language:pih",
"language:got",
"language:rmy",
"language:ee",
"language:pcm",
"language:bm",
"language:ss",
"language:gpe",
"language:ts",
"language:ve",
"language:kcg",
"language:chy",
"language:rn",
"language:ch",
"language:gur",
"language:ik",
"language:ady",
"language:fat",
"language:pnt",
"language:guc",
"language:iu",
"language:pwn",
"language:sg",
"language:din",
"language:ti",
"language:kl",
"language:dz",
"language:cr",
"license:cc-by-sa-4.0",
"wikidata",
"wikipedia",
"wikification",
"named-entity-linking",
"nel",
"entity-linking",
"el",
"named-entity-disambiguation",
"ned",
"entity-disambiguation",
"ed",
"region:us"
] | cyanic-selkie | null | null | null | 2 | 1,861 | ---
license: cc-by-sa-4.0
pretty_name: WikiAnc
annotations_creators:
- machine-generated
- crowdsourced
language_creators:
- machine-generated
- crowdsourced
task_categories:
- token-classification
multilinguality:
- multilingual
language:
- en
- ceb
- de
- sv
- fr
- nl
- ru
- es
- it
- arz
- pl
- ja
- zh
- vi
- uk
- war
- ar
- pt
- fa
- ca
- sr
- id
- ko
- 'no'
- ce
- fi
- cs
- tr
- hu
- tt
- sh
- ro
#- zh-min-nan
- eu
- ms
- eo
- he
- hy
- da
- bg
- cy
- sk
- azb
- uz
- et
#- simple
- be
- kk
- min
- el
- hr
- lt
- gl
- az
- ur
- sl
- lld
- ka
- nn
- hi
- th
- ta
- bn
- la
- mk
#- zh-yue
- ast
- lv
- af
- tg
- my
- mg
- mr
- sq
- bs
- oc
- te
- ml
- nds
- br
- ky
- sw
- jv
- lmo
- new
- pnb
- vec
- ht
- pms
- ba
- lb
- su
- ku
- ga
- szl
- is
- fy
- cv
- ckb
- pa
- tl
- an
- wuu
- diq
- io
- sco
- vo
- yo
- ne
- ia
- kn
- gu
- als
- ha
- avk
- bar
- crh
- scn
- bpy
- qu
- mn
- nv
- xmf
- ban
- si
- tum
- ps
- ig
- frr
- os
- mzn
#- bat-smg
- or
- sah
- cdo
- gd
- bug
- yi
- sd
- ilo
- am
- nap
- li
- bcl
- fo
- gor
- hsb
#- map-bms
- mai
- shn
- eml
- ace
#- zh-classical
- sa
- as
- wa
- ie
- hyw
- lij
- mhr
- zu
- sn
- hif
- mrj
- bjn
- km
- mni
- hak
#- roa-tara
- pam
- sat
- rue
- nso
- bh
- so
- mi
- se
- myv
- vls
#- nds-nl
- dag
- sc
- co
- ary
- kw
- bo
- vep
- glk
- tk
- kab
- gan
- rw
#- fiu-vro
- ab
- gv
- ug
- nah
- zea
- skr
- frp
- udm
- pcd
- mt
- kv
- csb
- gn
- smn
- ay
- nrm
- ks
- lez
- lfn
- olo
- mwl
- lo
- stq
- ang
- mdf
- fur
- rm
- lad
- kaa
- gom
- ext
- koi
- tyv
- pap
- av
- dsb
- ln
- dty
- tw
#- cbk-zam
- dv
- ksh
- za
- gag
- bxr
- pfl
- lg
- szy
- pag
- blk
- pi
- tay
- haw
- awa
- inh
- krc
- xal
- pdc
- to
- atj
- tcy
- arc
- mnw
- shi
- jam
- kbp
- wo
- anp
- kbd
- nia
- om
- nov
- ki
- nqo
- bi
- xh
- tpi
- ff
- tet
#- roa-rup
- jbo
- fj
- kg
- lbe
- ty
- cu
- guw
- trv
- ami
- srn
- sm
- mad
- alt
- ltg
- gcr
- chr
- tn
- ny
- st
- pih
- got
- rmy
- ee
- pcm
- bm
- ss
- gpe
- ts
- ve
- kcg
- chy
- rn
- ch
- gur
- ik
- ady
- fat
- pnt
- guc
- iu
- pwn
- sg
- din
- ti
- kl
- dz
- cr
tags:
- wikidata
- wikipedia
- wikification
- named-entity-linking
- nel
- entity-linking
- el
- named-entity-disambiguation
- ned
- entity-disambiguation
- ed
configs:
- config_name: ab
data_files:
- split: train
path: "data/ab/train.parquet"
- split: validation
path: "data/ab/validation.parquet"
- config_name: ace
data_files:
- split: train
path: "data/ace/train.parquet"
- split: validation
path: "data/ace/validation.parquet"
- config_name: ady
data_files:
- split: train
path: "data/ady/train.parquet"
- split: validation
path: "data/ady/validation.parquet"
- config_name: af
data_files:
- split: train
path: "data/af/train.parquet"
- split: validation
path: "data/af/validation.parquet"
- config_name: als
data_files:
- split: train
path: "data/als/train.parquet"
- split: validation
path: "data/als/validation.parquet"
- config_name: alt
data_files:
- split: train
path: "data/alt/train.parquet"
- split: validation
path: "data/alt/validation.parquet"
- config_name: am
data_files:
- split: train
path: "data/am/train.parquet"
- split: validation
path: "data/am/validation.parquet"
- config_name: ami
data_files:
- split: train
path: "data/ami/train.parquet"
- split: validation
path: "data/ami/validation.parquet"
- config_name: an
data_files:
- split: train
path: "data/an/train.parquet"
- split: validation
path: "data/an/validation.parquet"
- config_name: ang
data_files:
- split: train
path: "data/ang/train.parquet"
- split: validation
path: "data/ang/validation.parquet"
- config_name: anp
data_files:
- split: train
path: "data/anp/train.parquet"
- split: validation
path: "data/anp/validation.parquet"
- config_name: ar
data_files:
- split: train
path: "data/ar/train.parquet"
- split: validation
path: "data/ar/validation.parquet"
- config_name: arc
data_files:
- split: train
path: "data/arc/train.parquet"
- split: validation
path: "data/arc/validation.parquet"
- config_name: ary
data_files:
- split: train
path: "data/ary/train.parquet"
- split: validation
path: "data/ary/validation.parquet"
- config_name: arz
data_files:
- split: train
path: "data/arz/train.parquet"
- split: validation
path: "data/arz/validation.parquet"
- config_name: as
data_files:
- split: train
path: "data/as/train.parquet"
- split: validation
path: "data/as/validation.parquet"
- config_name: ast
data_files:
- split: train
path: "data/ast/train.parquet"
- split: validation
path: "data/ast/validation.parquet"
- config_name: atj
data_files:
- split: train
path: "data/atj/train.parquet"
- split: validation
path: "data/atj/validation.parquet"
- config_name: av
data_files:
- split: train
path: "data/av/train.parquet"
- split: validation
path: "data/av/validation.parquet"
- config_name: avk
data_files:
- split: train
path: "data/avk/train.parquet"
- split: validation
path: "data/avk/validation.parquet"
- config_name: awa
data_files:
- split: train
path: "data/awa/train.parquet"
- split: validation
path: "data/awa/validation.parquet"
- config_name: ay
data_files:
- split: train
path: "data/ay/train.parquet"
- split: validation
path: "data/ay/validation.parquet"
- config_name: az
data_files:
- split: train
path: "data/az/train.parquet"
- split: validation
path: "data/az/validation.parquet"
- config_name: azb
data_files:
- split: train
path: "data/azb/train.parquet"
- split: validation
path: "data/azb/validation.parquet"
- config_name: ba
data_files:
- split: train
path: "data/ba/train.parquet"
- split: validation
path: "data/ba/validation.parquet"
- config_name: ban
data_files:
- split: train
path: "data/ban/train.parquet"
- split: validation
path: "data/ban/validation.parquet"
- config_name: bar
data_files:
- split: train
path: "data/bar/train.parquet"
- split: validation
path: "data/bar/validation.parquet"
- config_name: bat_smg
data_files:
- split: train
path: "data/bat_smg/train.parquet"
- split: validation
path: "data/bat_smg/validation.parquet"
- config_name: bcl
data_files:
- split: train
path: "data/bcl/train.parquet"
- split: validation
path: "data/bcl/validation.parquet"
- config_name: be
data_files:
- split: train
path: "data/be/train.parquet"
- split: validation
path: "data/be/validation.parquet"
- config_name: bg
data_files:
- split: train
path: "data/bg/train.parquet"
- split: validation
path: "data/bg/validation.parquet"
- config_name: bh
data_files:
- split: train
path: "data/bh/train.parquet"
- split: validation
path: "data/bh/validation.parquet"
- config_name: bi
data_files:
- split: train
path: "data/bi/train.parquet"
- split: validation
path: "data/bi/validation.parquet"
- config_name: bjn
data_files:
- split: train
path: "data/bjn/train.parquet"
- split: validation
path: "data/bjn/validation.parquet"
- config_name: blk
data_files:
- split: train
path: "data/blk/train.parquet"
- split: validation
path: "data/blk/validation.parquet"
- config_name: bm
data_files:
- split: train
path: "data/bm/train.parquet"
- split: validation
path: "data/bm/validation.parquet"
- config_name: bn
data_files:
- split: train
path: "data/bn/train.parquet"
- split: validation
path: "data/bn/validation.parquet"
- config_name: bo
data_files:
- split: train
path: "data/bo/train.parquet"
- split: validation
path: "data/bo/validation.parquet"
- config_name: bpy
data_files:
- split: train
path: "data/bpy/train.parquet"
- split: validation
path: "data/bpy/validation.parquet"
- config_name: br
data_files:
- split: train
path: "data/br/train.parquet"
- split: validation
path: "data/br/validation.parquet"
- config_name: bs
data_files:
- split: train
path: "data/bs/train.parquet"
- split: validation
path: "data/bs/validation.parquet"
- config_name: bug
data_files:
- split: train
path: "data/bug/train.parquet"
- split: validation
path: "data/bug/validation.parquet"
- config_name: bxr
data_files:
- split: train
path: "data/bxr/train.parquet"
- split: validation
path: "data/bxr/validation.parquet"
- config_name: ca
data_files:
- split: train
path: "data/ca/train.parquet"
- split: validation
path: "data/ca/validation.parquet"
- config_name: cbk_zam
data_files:
- split: train
path: "data/cbk_zam/train.parquet"
- split: validation
path: "data/cbk_zam/validation.parquet"
- config_name: cdo
data_files:
- split: train
path: "data/cdo/train.parquet"
- split: validation
path: "data/cdo/validation.parquet"
- config_name: ce
data_files:
- split: train
path: "data/ce/train.parquet"
- split: validation
path: "data/ce/validation.parquet"
- config_name: ceb
data_files:
- split: train
path: "data/ceb/train.parquet"
- split: validation
path: "data/ceb/validation.parquet"
- config_name: ch
data_files:
- split: train
path: "data/ch/train.parquet"
- split: validation
path: "data/ch/validation.parquet"
- config_name: chr
data_files:
- split: train
path: "data/chr/train.parquet"
- split: validation
path: "data/chr/validation.parquet"
- config_name: chy
data_files:
- split: train
path: "data/chy/train.parquet"
- split: validation
path: "data/chy/validation.parquet"
- config_name: ckb
data_files:
- split: train
path: "data/ckb/train.parquet"
- split: validation
path: "data/ckb/validation.parquet"
- config_name: co
data_files:
- split: train
path: "data/co/train.parquet"
- split: validation
path: "data/co/validation.parquet"
- config_name: cr
data_files:
- split: train
path: "data/cr/train.parquet"
- split: validation
path: "data/cr/validation.parquet"
- config_name: crh
data_files:
- split: train
path: "data/crh/train.parquet"
- split: validation
path: "data/crh/validation.parquet"
- config_name: cs
data_files:
- split: train
path: "data/cs/train.parquet"
- split: validation
path: "data/cs/validation.parquet"
- config_name: csb
data_files:
- split: train
path: "data/csb/train.parquet"
- split: validation
path: "data/csb/validation.parquet"
- config_name: cu
data_files:
- split: train
path: "data/cu/train.parquet"
- split: validation
path: "data/cu/validation.parquet"
- config_name: cv
data_files:
- split: train
path: "data/cv/train.parquet"
- split: validation
path: "data/cv/validation.parquet"
- config_name: cy
data_files:
- split: train
path: "data/cy/train.parquet"
- split: validation
path: "data/cy/validation.parquet"
- config_name: da
data_files:
- split: train
path: "data/da/train.parquet"
- split: validation
path: "data/da/validation.parquet"
- config_name: dag
data_files:
- split: train
path: "data/dag/train.parquet"
- split: validation
path: "data/dag/validation.parquet"
- config_name: de
data_files:
- split: train
path: "data/de/train.parquet"
- split: validation
path: "data/de/validation.parquet"
- config_name: din
data_files:
- split: train
path: "data/din/train.parquet"
- split: validation
path: "data/din/validation.parquet"
- config_name: diq
data_files:
- split: train
path: "data/diq/train.parquet"
- split: validation
path: "data/diq/validation.parquet"
- config_name: dsb
data_files:
- split: train
path: "data/dsb/train.parquet"
- split: validation
path: "data/dsb/validation.parquet"
- config_name: dty
data_files:
- split: train
path: "data/dty/train.parquet"
- split: validation
path: "data/dty/validation.parquet"
- config_name: dv
data_files:
- split: train
path: "data/dv/train.parquet"
- split: validation
path: "data/dv/validation.parquet"
- config_name: dz
data_files:
- split: train
path: "data/dz/train.parquet"
- split: validation
path: "data/dz/validation.parquet"
- config_name: ee
data_files:
- split: train
path: "data/ee/train.parquet"
- split: validation
path: "data/ee/validation.parquet"
- config_name: el
data_files:
- split: train
path: "data/el/train.parquet"
- split: validation
path: "data/el/validation.parquet"
- config_name: eml
data_files:
- split: train
path: "data/eml/train.parquet"
- split: validation
path: "data/eml/validation.parquet"
- config_name: en
data_files:
- split: train
path: "data/en/train.parquet"
- split: validation
path: "data/en/validation.parquet"
- config_name: eo
data_files:
- split: train
path: "data/eo/train.parquet"
- split: validation
path: "data/eo/validation.parquet"
- config_name: es
data_files:
- split: train
path: "data/es/train.parquet"
- split: validation
path: "data/es/validation.parquet"
- config_name: et
data_files:
- split: train
path: "data/et/train.parquet"
- split: validation
path: "data/et/validation.parquet"
- config_name: eu
data_files:
- split: train
path: "data/eu/train.parquet"
- split: validation
path: "data/eu/validation.parquet"
- config_name: ext
data_files:
- split: train
path: "data/ext/train.parquet"
- split: validation
path: "data/ext/validation.parquet"
- config_name: fa
data_files:
- split: train
path: "data/fa/train.parquet"
- split: validation
path: "data/fa/validation.parquet"
- config_name: fat
data_files:
- split: train
path: "data/fat/train.parquet"
- split: validation
path: "data/fat/validation.parquet"
- config_name: ff
data_files:
- split: train
path: "data/ff/train.parquet"
- split: validation
path: "data/ff/validation.parquet"
- config_name: fi
data_files:
- split: train
path: "data/fi/train.parquet"
- split: validation
path: "data/fi/validation.parquet"
- config_name: fiu_vro
data_files:
- split: train
path: "data/fiu_vro/train.parquet"
- split: validation
path: "data/fiu_vro/validation.parquet"
- config_name: fj
data_files:
- split: train
path: "data/fj/train.parquet"
- split: validation
path: "data/fj/validation.parquet"
- config_name: fo
data_files:
- split: train
path: "data/fo/train.parquet"
- split: validation
path: "data/fo/validation.parquet"
- config_name: fr
data_files:
- split: train
path: "data/fr/train.parquet"
- split: validation
path: "data/fr/validation.parquet"
- config_name: frp
data_files:
- split: train
path: "data/frp/train.parquet"
- split: validation
path: "data/frp/validation.parquet"
- config_name: frr
data_files:
- split: train
path: "data/frr/train.parquet"
- split: validation
path: "data/frr/validation.parquet"
- config_name: fur
data_files:
- split: train
path: "data/fur/train.parquet"
- split: validation
path: "data/fur/validation.parquet"
- config_name: fy
data_files:
- split: train
path: "data/fy/train.parquet"
- split: validation
path: "data/fy/validation.parquet"
- config_name: ga
data_files:
- split: train
path: "data/ga/train.parquet"
- split: validation
path: "data/ga/validation.parquet"
- config_name: gag
data_files:
- split: train
path: "data/gag/train.parquet"
- split: validation
path: "data/gag/validation.parquet"
- config_name: gan
data_files:
- split: train
path: "data/gan/train.parquet"
- split: validation
path: "data/gan/validation.parquet"
- config_name: gcr
data_files:
- split: train
path: "data/gcr/train.parquet"
- split: validation
path: "data/gcr/validation.parquet"
- config_name: gd
data_files:
- split: train
path: "data/gd/train.parquet"
- split: validation
path: "data/gd/validation.parquet"
- config_name: gl
data_files:
- split: train
path: "data/gl/train.parquet"
- split: validation
path: "data/gl/validation.parquet"
- config_name: glk
data_files:
- split: train
path: "data/glk/train.parquet"
- split: validation
path: "data/glk/validation.parquet"
- config_name: gn
data_files:
- split: train
path: "data/gn/train.parquet"
- split: validation
path: "data/gn/validation.parquet"
- config_name: gom
data_files:
- split: train
path: "data/gom/train.parquet"
- split: validation
path: "data/gom/validation.parquet"
- config_name: gor
data_files:
- split: train
path: "data/gor/train.parquet"
- split: validation
path: "data/gor/validation.parquet"
- config_name: got
data_files:
- split: train
path: "data/got/train.parquet"
- split: validation
path: "data/got/validation.parquet"
- config_name: gpe
data_files:
- split: train
path: "data/gpe/train.parquet"
- split: validation
path: "data/gpe/validation.parquet"
- config_name: gu
data_files:
- split: train
path: "data/gu/train.parquet"
- split: validation
path: "data/gu/validation.parquet"
- config_name: guc
data_files:
- split: train
path: "data/guc/train.parquet"
- split: validation
path: "data/guc/validation.parquet"
- config_name: gur
data_files:
- split: train
path: "data/gur/train.parquet"
- split: validation
path: "data/gur/validation.parquet"
- config_name: guw
data_files:
- split: train
path: "data/guw/train.parquet"
- split: validation
path: "data/guw/validation.parquet"
- config_name: gv
data_files:
- split: train
path: "data/gv/train.parquet"
- split: validation
path: "data/gv/validation.parquet"
- config_name: ha
data_files:
- split: train
path: "data/ha/train.parquet"
- split: validation
path: "data/ha/validation.parquet"
- config_name: hak
data_files:
- split: train
path: "data/hak/train.parquet"
- split: validation
path: "data/hak/validation.parquet"
- config_name: haw
data_files:
- split: train
path: "data/haw/train.parquet"
- split: validation
path: "data/haw/validation.parquet"
- config_name: he
data_files:
- split: train
path: "data/he/train.parquet"
- split: validation
path: "data/he/validation.parquet"
- config_name: hi
data_files:
- split: train
path: "data/hi/train.parquet"
- split: validation
path: "data/hi/validation.parquet"
- config_name: hif
data_files:
- split: train
path: "data/hif/train.parquet"
- split: validation
path: "data/hif/validation.parquet"
- config_name: hr
data_files:
- split: train
path: "data/hr/train.parquet"
- split: validation
path: "data/hr/validation.parquet"
- config_name: hsb
data_files:
- split: train
path: "data/hsb/train.parquet"
- split: validation
path: "data/hsb/validation.parquet"
- config_name: ht
data_files:
- split: train
path: "data/ht/train.parquet"
- split: validation
path: "data/ht/validation.parquet"
- config_name: hu
data_files:
- split: train
path: "data/hu/train.parquet"
- split: validation
path: "data/hu/validation.parquet"
- config_name: hy
data_files:
- split: train
path: "data/hy/train.parquet"
- split: validation
path: "data/hy/validation.parquet"
- config_name: hyw
data_files:
- split: train
path: "data/hyw/train.parquet"
- split: validation
path: "data/hyw/validation.parquet"
- config_name: ia
data_files:
- split: train
path: "data/ia/train.parquet"
- split: validation
path: "data/ia/validation.parquet"
- config_name: id
data_files:
- split: train
path: "data/id/train.parquet"
- split: validation
path: "data/id/validation.parquet"
- config_name: ie
data_files:
- split: train
path: "data/ie/train.parquet"
- split: validation
path: "data/ie/validation.parquet"
- config_name: ig
data_files:
- split: train
path: "data/ig/train.parquet"
- split: validation
path: "data/ig/validation.parquet"
- config_name: ik
data_files:
- split: train
path: "data/ik/train.parquet"
- split: validation
path: "data/ik/validation.parquet"
- config_name: ilo
data_files:
- split: train
path: "data/ilo/train.parquet"
- split: validation
path: "data/ilo/validation.parquet"
- config_name: inh
data_files:
- split: train
path: "data/inh/train.parquet"
- split: validation
path: "data/inh/validation.parquet"
- config_name: io
data_files:
- split: train
path: "data/io/train.parquet"
- split: validation
path: "data/io/validation.parquet"
- config_name: is
data_files:
- split: train
path: "data/is/train.parquet"
- split: validation
path: "data/is/validation.parquet"
- config_name: it
data_files:
- split: train
path: "data/it/train.parquet"
- split: validation
path: "data/it/validation.parquet"
- config_name: iu
data_files:
- split: train
path: "data/iu/train.parquet"
- split: validation
path: "data/iu/validation.parquet"
- config_name: ja
data_files:
- split: train
path: "data/ja/train.parquet"
- split: validation
path: "data/ja/validation.parquet"
- config_name: jam
data_files:
- split: train
path: "data/jam/train.parquet"
- split: validation
path: "data/jam/validation.parquet"
- config_name: jbo
data_files:
- split: train
path: "data/jbo/train.parquet"
- split: validation
path: "data/jbo/validation.parquet"
- config_name: jv
data_files:
- split: train
path: "data/jv/train.parquet"
- split: validation
path: "data/jv/validation.parquet"
- config_name: ka
data_files:
- split: train
path: "data/ka/train.parquet"
- split: validation
path: "data/ka/validation.parquet"
- config_name: kaa
data_files:
- split: train
path: "data/kaa/train.parquet"
- split: validation
path: "data/kaa/validation.parquet"
- config_name: kab
data_files:
- split: train
path: "data/kab/train.parquet"
- split: validation
path: "data/kab/validation.parquet"
- config_name: kbd
data_files:
- split: train
path: "data/kbd/train.parquet"
- split: validation
path: "data/kbd/validation.parquet"
- config_name: kbp
data_files:
- split: train
path: "data/kbp/train.parquet"
- split: validation
path: "data/kbp/validation.parquet"
- config_name: kcg
data_files:
- split: train
path: "data/kcg/train.parquet"
- split: validation
path: "data/kcg/validation.parquet"
- config_name: kg
data_files:
- split: train
path: "data/kg/train.parquet"
- split: validation
path: "data/kg/validation.parquet"
- config_name: ki
data_files:
- split: train
path: "data/ki/train.parquet"
- split: validation
path: "data/ki/validation.parquet"
- config_name: kk
data_files:
- split: train
path: "data/kk/train.parquet"
- split: validation
path: "data/kk/validation.parquet"
- config_name: kl
data_files:
- split: train
path: "data/kl/train.parquet"
- split: validation
path: "data/kl/validation.parquet"
- config_name: km
data_files:
- split: train
path: "data/km/train.parquet"
- split: validation
path: "data/km/validation.parquet"
- config_name: kn
data_files:
- split: train
path: "data/kn/train.parquet"
- split: validation
path: "data/kn/validation.parquet"
- config_name: ko
data_files:
- split: train
path: "data/ko/train.parquet"
- split: validation
path: "data/ko/validation.parquet"
- config_name: koi
data_files:
- split: train
path: "data/koi/train.parquet"
- split: validation
path: "data/koi/validation.parquet"
- config_name: krc
data_files:
- split: train
path: "data/krc/train.parquet"
- split: validation
path: "data/krc/validation.parquet"
- config_name: ks
data_files:
- split: train
path: "data/ks/train.parquet"
- split: validation
path: "data/ks/validation.parquet"
- config_name: ksh
data_files:
- split: train
path: "data/ksh/train.parquet"
- split: validation
path: "data/ksh/validation.parquet"
- config_name: ku
data_files:
- split: train
path: "data/ku/train.parquet"
- split: validation
path: "data/ku/validation.parquet"
- config_name: kv
data_files:
- split: train
path: "data/kv/train.parquet"
- split: validation
path: "data/kv/validation.parquet"
- config_name: kw
data_files:
- split: train
path: "data/kw/train.parquet"
- split: validation
path: "data/kw/validation.parquet"
- config_name: ky
data_files:
- split: train
path: "data/ky/train.parquet"
- split: validation
path: "data/ky/validation.parquet"
- config_name: la
data_files:
- split: train
path: "data/la/train.parquet"
- split: validation
path: "data/la/validation.parquet"
- config_name: lad
data_files:
- split: train
path: "data/lad/train.parquet"
- split: validation
path: "data/lad/validation.parquet"
- config_name: lb
data_files:
- split: train
path: "data/lb/train.parquet"
- split: validation
path: "data/lb/validation.parquet"
- config_name: lbe
data_files:
- split: train
path: "data/lbe/train.parquet"
- split: validation
path: "data/lbe/validation.parquet"
- config_name: lez
data_files:
- split: train
path: "data/lez/train.parquet"
- split: validation
path: "data/lez/validation.parquet"
- config_name: lfn
data_files:
- split: train
path: "data/lfn/train.parquet"
- split: validation
path: "data/lfn/validation.parquet"
- config_name: lg
data_files:
- split: train
path: "data/lg/train.parquet"
- split: validation
path: "data/lg/validation.parquet"
- config_name: li
data_files:
- split: train
path: "data/li/train.parquet"
- split: validation
path: "data/li/validation.parquet"
- config_name: lij
data_files:
- split: train
path: "data/lij/train.parquet"
- split: validation
path: "data/lij/validation.parquet"
- config_name: lld
data_files:
- split: train
path: "data/lld/train.parquet"
- split: validation
path: "data/lld/validation.parquet"
- config_name: lmo
data_files:
- split: train
path: "data/lmo/train.parquet"
- split: validation
path: "data/lmo/validation.parquet"
- config_name: ln
data_files:
- split: train
path: "data/ln/train.parquet"
- split: validation
path: "data/ln/validation.parquet"
- config_name: lo
data_files:
- split: train
path: "data/lo/train.parquet"
- split: validation
path: "data/lo/validation.parquet"
- config_name: lt
data_files:
- split: train
path: "data/lt/train.parquet"
- split: validation
path: "data/lt/validation.parquet"
- config_name: ltg
data_files:
- split: train
path: "data/ltg/train.parquet"
- split: validation
path: "data/ltg/validation.parquet"
- config_name: lv
data_files:
- split: train
path: "data/lv/train.parquet"
- split: validation
path: "data/lv/validation.parquet"
- config_name: mad
data_files:
- split: train
path: "data/mad/train.parquet"
- split: validation
path: "data/mad/validation.parquet"
- config_name: mai
data_files:
- split: train
path: "data/mai/train.parquet"
- split: validation
path: "data/mai/validation.parquet"
- config_name: map_bms
data_files:
- split: train
path: "data/map_bms/train.parquet"
- split: validation
path: "data/map_bms/validation.parquet"
- config_name: mdf
data_files:
- split: train
path: "data/mdf/train.parquet"
- split: validation
path: "data/mdf/validation.parquet"
- config_name: mg
data_files:
- split: train
path: "data/mg/train.parquet"
- split: validation
path: "data/mg/validation.parquet"
- config_name: mhr
data_files:
- split: train
path: "data/mhr/train.parquet"
- split: validation
path: "data/mhr/validation.parquet"
- config_name: mi
data_files:
- split: train
path: "data/mi/train.parquet"
- split: validation
path: "data/mi/validation.parquet"
- config_name: min
data_files:
- split: train
path: "data/min/train.parquet"
- split: validation
path: "data/min/validation.parquet"
- config_name: mk
data_files:
- split: train
path: "data/mk/train.parquet"
- split: validation
path: "data/mk/validation.parquet"
- config_name: ml
data_files:
- split: train
path: "data/ml/train.parquet"
- split: validation
path: "data/ml/validation.parquet"
- config_name: mn
data_files:
- split: train
path: "data/mn/train.parquet"
- split: validation
path: "data/mn/validation.parquet"
- config_name: mni
data_files:
- split: train
path: "data/mni/train.parquet"
- split: validation
path: "data/mni/validation.parquet"
- config_name: mnw
data_files:
- split: train
path: "data/mnw/train.parquet"
- split: validation
path: "data/mnw/validation.parquet"
- config_name: mr
data_files:
- split: train
path: "data/mr/train.parquet"
- split: validation
path: "data/mr/validation.parquet"
- config_name: mrj
data_files:
- split: train
path: "data/mrj/train.parquet"
- split: validation
path: "data/mrj/validation.parquet"
- config_name: ms
data_files:
- split: train
path: "data/ms/train.parquet"
- split: validation
path: "data/ms/validation.parquet"
- config_name: mt
data_files:
- split: train
path: "data/mt/train.parquet"
- split: validation
path: "data/mt/validation.parquet"
- config_name: mwl
data_files:
- split: train
path: "data/mwl/train.parquet"
- split: validation
path: "data/mwl/validation.parquet"
- config_name: my
data_files:
- split: train
path: "data/my/train.parquet"
- split: validation
path: "data/my/validation.parquet"
- config_name: myv
data_files:
- split: train
path: "data/myv/train.parquet"
- split: validation
path: "data/myv/validation.parquet"
- config_name: mzn
data_files:
- split: train
path: "data/mzn/train.parquet"
- split: validation
path: "data/mzn/validation.parquet"
- config_name: nah
data_files:
- split: train
path: "data/nah/train.parquet"
- split: validation
path: "data/nah/validation.parquet"
- config_name: nap
data_files:
- split: train
path: "data/nap/train.parquet"
- split: validation
path: "data/nap/validation.parquet"
- config_name: nds
data_files:
- split: train
path: "data/nds/train.parquet"
- split: validation
path: "data/nds/validation.parquet"
- config_name: nds_nl
data_files:
- split: train
path: "data/nds_nl/train.parquet"
- split: validation
path: "data/nds_nl/validation.parquet"
- config_name: ne
data_files:
- split: train
path: "data/ne/train.parquet"
- split: validation
path: "data/ne/validation.parquet"
- config_name: new
data_files:
- split: train
path: "data/new/train.parquet"
- split: validation
path: "data/new/validation.parquet"
- config_name: nia
data_files:
- split: train
path: "data/nia/train.parquet"
- split: validation
path: "data/nia/validation.parquet"
- config_name: nl
data_files:
- split: train
path: "data/nl/train.parquet"
- split: validation
path: "data/nl/validation.parquet"
- config_name: nn
data_files:
- split: train
path: "data/nn/train.parquet"
- split: validation
path: "data/nn/validation.parquet"
- config_name: 'no'
data_files:
- split: train
path: "data/no/train.parquet"
- split: validation
path: "data/no/validation.parquet"
- config_name: nov
data_files:
- split: train
path: "data/nov/train.parquet"
- split: validation
path: "data/nov/validation.parquet"
- config_name: nqo
data_files:
- split: train
path: "data/nqo/train.parquet"
- split: validation
path: "data/nqo/validation.parquet"
- config_name: nrm
data_files:
- split: train
path: "data/nrm/train.parquet"
- split: validation
path: "data/nrm/validation.parquet"
- config_name: nso
data_files:
- split: train
path: "data/nso/train.parquet"
- split: validation
path: "data/nso/validation.parquet"
- config_name: nv
data_files:
- split: train
path: "data/nv/train.parquet"
- split: validation
path: "data/nv/validation.parquet"
- config_name: ny
data_files:
- split: train
path: "data/ny/train.parquet"
- split: validation
path: "data/ny/validation.parquet"
- config_name: oc
data_files:
- split: train
path: "data/oc/train.parquet"
- split: validation
path: "data/oc/validation.parquet"
- config_name: olo
data_files:
- split: train
path: "data/olo/train.parquet"
- split: validation
path: "data/olo/validation.parquet"
- config_name: om
data_files:
- split: train
path: "data/om/train.parquet"
- split: validation
path: "data/om/validation.parquet"
- config_name: or
data_files:
- split: train
path: "data/or/train.parquet"
- split: validation
path: "data/or/validation.parquet"
- config_name: os
data_files:
- split: train
path: "data/os/train.parquet"
- split: validation
path: "data/os/validation.parquet"
- config_name: pa
data_files:
- split: train
path: "data/pa/train.parquet"
- split: validation
path: "data/pa/validation.parquet"
- config_name: pag
data_files:
- split: train
path: "data/pag/train.parquet"
- split: validation
path: "data/pag/validation.parquet"
- config_name: pam
data_files:
- split: train
path: "data/pam/train.parquet"
- split: validation
path: "data/pam/validation.parquet"
- config_name: pap
data_files:
- split: train
path: "data/pap/train.parquet"
- split: validation
path: "data/pap/validation.parquet"
- config_name: pcd
data_files:
- split: train
path: "data/pcd/train.parquet"
- split: validation
path: "data/pcd/validation.parquet"
- config_name: pcm
data_files:
- split: train
path: "data/pcm/train.parquet"
- split: validation
path: "data/pcm/validation.parquet"
- config_name: pdc
data_files:
- split: train
path: "data/pdc/train.parquet"
- split: validation
path: "data/pdc/validation.parquet"
- config_name: pfl
data_files:
- split: train
path: "data/pfl/train.parquet"
- split: validation
path: "data/pfl/validation.parquet"
- config_name: pi
data_files:
- split: train
path: "data/pi/train.parquet"
- split: validation
path: "data/pi/validation.parquet"
- config_name: pih
data_files:
- split: train
path: "data/pih/train.parquet"
- split: validation
path: "data/pih/validation.parquet"
- config_name: pl
data_files:
- split: train
path: "data/pl/train.parquet"
- split: validation
path: "data/pl/validation.parquet"
- config_name: pms
data_files:
- split: train
path: "data/pms/train.parquet"
- split: validation
path: "data/pms/validation.parquet"
- config_name: pnb
data_files:
- split: train
path: "data/pnb/train.parquet"
- split: validation
path: "data/pnb/validation.parquet"
- config_name: pnt
data_files:
- split: train
path: "data/pnt/train.parquet"
- split: validation
path: "data/pnt/validation.parquet"
- config_name: ps
data_files:
- split: train
path: "data/ps/train.parquet"
- split: validation
path: "data/ps/validation.parquet"
- config_name: pt
data_files:
- split: train
path: "data/pt/train.parquet"
- split: validation
path: "data/pt/validation.parquet"
- config_name: pwn
data_files:
- split: train
path: "data/pwn/train.parquet"
- split: validation
path: "data/pwn/validation.parquet"
- config_name: qu
data_files:
- split: train
path: "data/qu/train.parquet"
- split: validation
path: "data/qu/validation.parquet"
- config_name: rm
data_files:
- split: train
path: "data/rm/train.parquet"
- split: validation
path: "data/rm/validation.parquet"
- config_name: rmy
data_files:
- split: train
path: "data/rmy/train.parquet"
- split: validation
path: "data/rmy/validation.parquet"
- config_name: rn
data_files:
- split: train
path: "data/rn/train.parquet"
- split: validation
path: "data/rn/validation.parquet"
- config_name: ro
data_files:
- split: train
path: "data/ro/train.parquet"
- split: validation
path: "data/ro/validation.parquet"
- config_name: roa_rup
data_files:
- split: train
path: "data/roa_rup/train.parquet"
- split: validation
path: "data/roa_rup/validation.parquet"
- config_name: roa_tara
data_files:
- split: train
path: "data/roa_tara/train.parquet"
- split: validation
path: "data/roa_tara/validation.parquet"
- config_name: ru
data_files:
- split: train
path: "data/ru/train.parquet"
- split: validation
path: "data/ru/validation.parquet"
- config_name: rue
data_files:
- split: train
path: "data/rue/train.parquet"
- split: validation
path: "data/rue/validation.parquet"
- config_name: rw
data_files:
- split: train
path: "data/rw/train.parquet"
- split: validation
path: "data/rw/validation.parquet"
- config_name: sa
data_files:
- split: train
path: "data/sa/train.parquet"
- split: validation
path: "data/sa/validation.parquet"
- config_name: sah
data_files:
- split: train
path: "data/sah/train.parquet"
- split: validation
path: "data/sah/validation.parquet"
- config_name: sat
data_files:
- split: train
path: "data/sat/train.parquet"
- split: validation
path: "data/sat/validation.parquet"
- config_name: sc
data_files:
- split: train
path: "data/sc/train.parquet"
- split: validation
path: "data/sc/validation.parquet"
- config_name: scn
data_files:
- split: train
path: "data/scn/train.parquet"
- split: validation
path: "data/scn/validation.parquet"
- config_name: sco
data_files:
- split: train
path: "data/sco/train.parquet"
- split: validation
path: "data/sco/validation.parquet"
- config_name: sd
data_files:
- split: train
path: "data/sd/train.parquet"
- split: validation
path: "data/sd/validation.parquet"
- config_name: se
data_files:
- split: train
path: "data/se/train.parquet"
- split: validation
path: "data/se/validation.parquet"
- config_name: sg
data_files:
- split: train
path: "data/sg/train.parquet"
- split: validation
path: "data/sg/validation.parquet"
- config_name: sh
data_files:
- split: train
path: "data/sh/train.parquet"
- split: validation
path: "data/sh/validation.parquet"
- config_name: shi
data_files:
- split: train
path: "data/shi/train.parquet"
- split: validation
path: "data/shi/validation.parquet"
- config_name: shn
data_files:
- split: train
path: "data/shn/train.parquet"
- split: validation
path: "data/shn/validation.parquet"
- config_name: si
data_files:
- split: train
path: "data/si/train.parquet"
- split: validation
path: "data/si/validation.parquet"
- config_name: simple
data_files:
- split: train
path: "data/simple/train.parquet"
- split: validation
path: "data/simple/validation.parquet"
- config_name: sk
data_files:
- split: train
path: "data/sk/train.parquet"
- split: validation
path: "data/sk/validation.parquet"
- config_name: skr
data_files:
- split: train
path: "data/skr/train.parquet"
- split: validation
path: "data/skr/validation.parquet"
- config_name: sl
data_files:
- split: train
path: "data/sl/train.parquet"
- split: validation
path: "data/sl/validation.parquet"
- config_name: sm
data_files:
- split: train
path: "data/sm/train.parquet"
- split: validation
path: "data/sm/validation.parquet"
- config_name: smn
data_files:
- split: train
path: "data/smn/train.parquet"
- split: validation
path: "data/smn/validation.parquet"
- config_name: sn
data_files:
- split: train
path: "data/sn/train.parquet"
- split: validation
path: "data/sn/validation.parquet"
- config_name: so
data_files:
- split: train
path: "data/so/train.parquet"
- split: validation
path: "data/so/validation.parquet"
- config_name: sq
data_files:
- split: train
path: "data/sq/train.parquet"
- split: validation
path: "data/sq/validation.parquet"
- config_name: sr
data_files:
- split: train
path: "data/sr/train.parquet"
- split: validation
path: "data/sr/validation.parquet"
- config_name: srn
data_files:
- split: train
path: "data/srn/train.parquet"
- split: validation
path: "data/srn/validation.parquet"
- config_name: ss
data_files:
- split: train
path: "data/ss/train.parquet"
- split: validation
path: "data/ss/validation.parquet"
- config_name: st
data_files:
- split: train
path: "data/st/train.parquet"
- split: validation
path: "data/st/validation.parquet"
- config_name: stq
data_files:
- split: train
path: "data/stq/train.parquet"
- split: validation
path: "data/stq/validation.parquet"
- config_name: su
data_files:
- split: train
path: "data/su/train.parquet"
- split: validation
path: "data/su/validation.parquet"
- config_name: sv
data_files:
- split: train
path: "data/sv/train.parquet"
- split: validation
path: "data/sv/validation.parquet"
- config_name: sw
data_files:
- split: train
path: "data/sw/train.parquet"
- split: validation
path: "data/sw/validation.parquet"
- config_name: szl
data_files:
- split: train
path: "data/szl/train.parquet"
- split: validation
path: "data/szl/validation.parquet"
- config_name: szy
data_files:
- split: train
path: "data/szy/train.parquet"
- split: validation
path: "data/szy/validation.parquet"
- config_name: ta
data_files:
- split: train
path: "data/ta/train.parquet"
- split: validation
path: "data/ta/validation.parquet"
- config_name: tay
data_files:
- split: train
path: "data/tay/train.parquet"
- split: validation
path: "data/tay/validation.parquet"
- config_name: tcy
data_files:
- split: train
path: "data/tcy/train.parquet"
- split: validation
path: "data/tcy/validation.parquet"
- config_name: te
data_files:
- split: train
path: "data/te/train.parquet"
- split: validation
path: "data/te/validation.parquet"
- config_name: tet
data_files:
- split: train
path: "data/tet/train.parquet"
- split: validation
path: "data/tet/validation.parquet"
- config_name: tg
data_files:
- split: train
path: "data/tg/train.parquet"
- split: validation
path: "data/tg/validation.parquet"
- config_name: th
data_files:
- split: train
path: "data/th/train.parquet"
- split: validation
path: "data/th/validation.parquet"
- config_name: ti
data_files:
- split: train
path: "data/ti/train.parquet"
- split: validation
path: "data/ti/validation.parquet"
- config_name: tk
data_files:
- split: train
path: "data/tk/train.parquet"
- split: validation
path: "data/tk/validation.parquet"
- config_name: tl
data_files:
- split: train
path: "data/tl/train.parquet"
- split: validation
path: "data/tl/validation.parquet"
- config_name: tn
data_files:
- split: train
path: "data/tn/train.parquet"
- split: validation
path: "data/tn/validation.parquet"
- config_name: to
data_files:
- split: train
path: "data/to/train.parquet"
- split: validation
path: "data/to/validation.parquet"
- config_name: tpi
data_files:
- split: train
path: "data/tpi/train.parquet"
- split: validation
path: "data/tpi/validation.parquet"
- config_name: tr
data_files:
- split: train
path: "data/tr/train.parquet"
- split: validation
path: "data/tr/validation.parquet"
- config_name: trv
data_files:
- split: train
path: "data/trv/train.parquet"
- split: validation
path: "data/trv/validation.parquet"
- config_name: ts
data_files:
- split: train
path: "data/ts/train.parquet"
- split: validation
path: "data/ts/validation.parquet"
- config_name: tt
data_files:
- split: train
path: "data/tt/train.parquet"
- split: validation
path: "data/tt/validation.parquet"
- config_name: tum
data_files:
- split: train
path: "data/tum/train.parquet"
- split: validation
path: "data/tum/validation.parquet"
- config_name: tw
data_files:
- split: train
path: "data/tw/train.parquet"
- split: validation
path: "data/tw/validation.parquet"
- config_name: ty
data_files:
- split: train
path: "data/ty/train.parquet"
- split: validation
path: "data/ty/validation.parquet"
- config_name: tyv
data_files:
- split: train
path: "data/tyv/train.parquet"
- split: validation
path: "data/tyv/validation.parquet"
- config_name: udm
data_files:
- split: train
path: "data/udm/train.parquet"
- split: validation
path: "data/udm/validation.parquet"
- config_name: ug
data_files:
- split: train
path: "data/ug/train.parquet"
- split: validation
path: "data/ug/validation.parquet"
- config_name: uk
data_files:
- split: train
path: "data/uk/train.parquet"
- split: validation
path: "data/uk/validation.parquet"
- config_name: ur
data_files:
- split: train
path: "data/ur/train.parquet"
- split: validation
path: "data/ur/validation.parquet"
- config_name: uz
data_files:
- split: train
path: "data/uz/train.parquet"
- split: validation
path: "data/uz/validation.parquet"
- config_name: ve
data_files:
- split: train
path: "data/ve/train.parquet"
- split: validation
path: "data/ve/validation.parquet"
- config_name: vec
data_files:
- split: train
path: "data/vec/train.parquet"
- split: validation
path: "data/vec/validation.parquet"
- config_name: vep
data_files:
- split: train
path: "data/vep/train.parquet"
- split: validation
path: "data/vep/validation.parquet"
- config_name: vi
data_files:
- split: train
path: "data/vi/train.parquet"
- split: validation
path: "data/vi/validation.parquet"
- config_name: vls
data_files:
- split: train
path: "data/vls/train.parquet"
- split: validation
path: "data/vls/validation.parquet"
- config_name: vo
data_files:
- split: train
path: "data/vo/train.parquet"
- split: validation
path: "data/vo/validation.parquet"
- config_name: wa
data_files:
- split: train
path: "data/wa/train.parquet"
- split: validation
path: "data/wa/validation.parquet"
- config_name: war
data_files:
- split: train
path: "data/war/train.parquet"
- split: validation
path: "data/war/validation.parquet"
- config_name: wo
data_files:
- split: train
path: "data/wo/train.parquet"
- split: validation
path: "data/wo/validation.parquet"
- config_name: wuu
data_files:
- split: train
path: "data/wuu/train.parquet"
- split: validation
path: "data/wuu/validation.parquet"
- config_name: xal
data_files:
- split: train
path: "data/xal/train.parquet"
- split: validation
path: "data/xal/validation.parquet"
- config_name: xh
data_files:
- split: train
path: "data/xh/train.parquet"
- split: validation
path: "data/xh/validation.parquet"
- config_name: xmf
data_files:
- split: train
path: "data/xmf/train.parquet"
- split: validation
path: "data/xmf/validation.parquet"
- config_name: yi
data_files:
- split: train
path: "data/yi/train.parquet"
- split: validation
path: "data/yi/validation.parquet"
- config_name: yo
data_files:
- split: train
path: "data/yo/train.parquet"
- split: validation
path: "data/yo/validation.parquet"
- config_name: za
data_files:
- split: train
path: "data/za/train.parquet"
- split: validation
path: "data/za/validation.parquet"
- config_name: zea
data_files:
- split: train
path: "data/zea/train.parquet"
- split: validation
path: "data/zea/validation.parquet"
- config_name: zh
data_files:
- split: train
path: "data/zh/train.parquet"
- split: validation
path: "data/zh/validation.parquet"
- config_name: zh_classical
data_files:
- split: train
path: "data/zh_classical/train.parquet"
- split: validation
path: "data/zh_classical/validation.parquet"
- config_name: zh_min_nan
data_files:
- split: train
path: "data/zh_min_nan/train.parquet"
- split: validation
path: "data/zh_min_nan/validation.parquet"
- config_name: zh_yue
data_files:
- split: train
path: "data/zh_yue/train.parquet"
- split: validation
path: "data/zh_yue/validation.parquet"
- config_name: zu
data_files:
- split: train
path: "data/zu/train.parquet"
- split: validation
path: "data/zu/validation.parquet"
---
# Dataset Card for WikiAnc
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Repository:** [WikiAnc repository](https://github.com/cyanic-selkie/wikianc)
### Dataset Summary
The WikiAnc dataset is automatically generated from Wikipedia (all languages) and Wikidata dumps (August 2023).
The code for generating the dataset can be found [here](https://github.com/cyanic-selkie/wikianc).
### Supported Tasks
- `wikification`: The dataset can be used to train a model for Wikification.
- `named-entity-linking`: The dataset can be used to train a model for Named Entity Linking.
### Languages
The text in the dataset is in all 320 Wikipedia languages. The full list can be found in the table below.
## Dataset Structure
### Data Instances
A typical data point represents a paragraph in a Wikipedia article.
The `paragraph_text` field contains the original text in an NFC normalized, UTF-8 encoded string.
The `paragraph_anchors` field contains a list of anchors, each represented by a struct with an inclusive starting UTF-8 code point `start` field, an exclusive ending UTF-8 code point `end` field, a nullable `qid` field, a nullable `pageid` field, and an NFC normalized, UTF-8 encoded `title` (Wikipedia) field.
Additionally, each paragraph has `article_title`, `article_pageid`, and (nullable) `article_qid` fields referring to the article the paragraph came from.
There are also a nullable, NFC normalized, UTF-8 encoded `section_heading` field and an integer `section_level` field, giving the heading (if it exists) of the article section the paragraph came from and its level in the section hierarchy.
The `qid` field refers to Wikidata's QID identifiers, while the `pageid` and `title` fields refer to Wikipedia's pageID and title identifiers (there is a one-to-one mapping between pageIDs and titles).
**NOTE:** An anchor will always have a `title`, but that doesn't mean it has to have a `pageid`. This is because Wikipedia allows defining anchors to nonexistent articles.
An example from the WikiAnc EN test set looks as follows:
```json
{
"uuid": "5f74e678-944f-4761-a5e0-b6426f6f61b8",
"article_title": "Climatius",
"article_pageid": 5394373,
"article_qid": 867987,
"section_heading": null,
"section_level": 0,
"paragraph_text": "It was a small fish, at 7.5 cm, and to discourage predators, Climatius sported fifteen sharp spines. There was one spine each on the paired pelvic and pectoral fins, and on the aingle anal and two dorsal fins, and a four pairs without fins on the fish's underside.",
"paragraph_anchors": [
{
"start": 140,
"end": 146,
"qid": 3335089,
"pageid": 56849833,
"title": "Pelvic_fin"
},
{
"start": 151,
"end": 159,
"qid": 4162555,
"pageid": 331956,
"title": "Pectoral_fin"
},
{
"start": 184,
"end": 188,
"qid": 4162555,
"pageid": 331958,
"title": "Anal_fin"
},
{
"start": 197,
"end": 208,
"qid": 1568355,
"pageid": 294244,
"title": "Dorsal_fin"
}
]
}
```
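The `start`/`end` offsets in the instance above can be checked by slicing the paragraph text. This is a minimal sketch, assuming the code point offsets line up with Python string indices (Python strings are sequences of Unicode code points); the `paragraph` dict below simply restates the example:

```python
# Minimal sketch: recover anchor surface forms from code point offsets.
# The dict restates the example instance above; typos like "aingle" are in
# the source Wikipedia text, and the offsets depend on the text as-is.
paragraph = {
    "paragraph_text": (
        "It was a small fish, at 7.5 cm, and to discourage predators, "
        "Climatius sported fifteen sharp spines. There was one spine each "
        "on the paired pelvic and pectoral fins, and on the aingle anal and "
        "two dorsal fins, and a four pairs without fins on the fish's underside."
    ),
    "paragraph_anchors": [
        {"start": 140, "end": 146, "qid": 3335089, "pageid": 56849833, "title": "Pelvic_fin"},
        {"start": 151, "end": 159, "qid": 4162555, "pageid": 331956, "title": "Pectoral_fin"},
        {"start": 184, "end": 188, "qid": 4162555, "pageid": 331958, "title": "Anal_fin"},
        {"start": 197, "end": 208, "qid": 1568355, "pageid": 294244, "title": "Dorsal_fin"},
    ],
}

def anchor_surface_forms(example):
    """Slice each [start, end) span out of the paragraph text."""
    text = example["paragraph_text"]
    return [text[a["start"]:a["end"]] for a in example["paragraph_anchors"]]

print(anchor_surface_forms(paragraph))
# → ['pelvic', 'pectoral', 'anal', 'dorsal fins']
```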
### Data Fields
- `uuid`: a UTF-8 encoded string representing a v4 UUID that uniquely identifies the example
- `article_title`: an NFC normalized, UTF-8 encoded Wikipedia title of the article; spaces are replaced with underscores
- `article_pageid`: an integer representing the Wikipedia pageID of the article
- `article_qid`: an integer representing the Wikidata QID this article refers to; it can be null if the entity didn't exist in Wikidata at the time of the creation of the original dataset
- `section_heading`: a nullable, NFC normalized, UTF-8 encoded string representing the section heading
- `section_level`: an integer representing the level of the section in the section hierarchy
- `paragraph_text`: an NFC normalized, UTF-8 encoded string representing the paragraph
- `paragraph_anchors`: a list of structs representing anchors, each anchor has:
  - `start`: an integer representing the inclusive starting UTF-8 code point of the anchor
- `end`: an integer representing the exclusive ending UTF-8 code point of the anchor
- `qid`: a nullable integer representing the Wikidata QID this anchor refers to; it can be null if the entity didn't exist in Wikidata at the time of the creation of the original dataset
- `pageid`: a nullable integer representing the Wikipedia pageID of the anchor; it can be null if the article didn't exist in Wikipedia at the time of the creation of the original dataset
- `title`: an NFC normalized, UTF-8 encoded string representing the Wikipedia title of the anchor; spaces are replaced with underscores; can refer to a nonexistent Wikipedia article
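Because `qid` and `pageid` are nullable, downstream code should guard against missing values when turning anchors into entity-linking targets. A minimal sketch under the assumption that anchors without a QID are simply skipped; the helper and the toy instance below are hypothetical, not part of any dataset tooling:

```python
def linking_targets(example):
    """Return (surface form, QID) pairs, skipping anchors without a QID.

    Anchors may carry a `title` while `pageid`/`qid` are null (links to
    nonexistent articles), so both fields must be treated as optional.
    """
    text = example["paragraph_text"]
    pairs = []
    for a in example["paragraph_anchors"]:
        if a["qid"] is None:  # entity unresolved in Wikidata at dump time
            continue
        pairs.append((text[a["start"]:a["end"]], a["qid"]))
    return pairs

# Hypothetical toy instance: one resolvable anchor, one with null qid/pageid.
example = {
    "paragraph_text": "Paris is the capital of France.",
    "paragraph_anchors": [
        {"start": 0, "end": 5, "qid": 90, "pageid": 22989, "title": "Paris"},
        {"start": 24, "end": 30, "qid": None, "pageid": None, "title": "France"},
    ],
}
print(linking_targets(example))  # → [('Paris', 90)]
```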
### Data Splits
The data is split into training, validation and test sets; paragraphs belonging to the same article aren't necessarily in the same split. The final split sizes are as follows:
#### Train
| | Articles | Paragraphs | Anchors | Anchors with QIDs | Anchors with PageIDs |
| :-- | --: | --: | --: | --: | --: |
| ab | 2378 | 5678 | 10515 | 3649 | 3650 |
| ace | 12591 | 23969 | 48638 | 25150 | 25175 |
| ady | 596 | 1662 | 2694 | 1593 | 1606 |
| af | 104470 | 399038 | 985640 | 900596 | 900967 |
| als | 27999 | 165085 | 402049 | 294742 | 294744 |
| alt | 1043 | 7468 | 9158 | 5446 | 5452 |
| am | 13576 | 46318 | 90051 | 51915 | 52173 |
| ami | 1582 | 12428 | 6080 | 1505 | 2579 |
| an | 40179 | 121367 | 669830 | 516248 | 516822 |
| ang | 3833 | 9664 | 24297 | 10189 | 10229 |
| anp | 2506 | 6865 | 14560 | 3825 | 5061 |
| ar | 1132271 | 3617491 | 11657228 | 11240112 | 11244160 |
| arc | 1844 | 3766 | 9232 | 5460 | 5545 |
| ary | 6736 | 17049 | 50185 | 34193 | 34227 |
| arz | 1579782 | 3693549 | 7879303 | 6906799 | 6917393 |
| as | 11947 | 77835 | 122760 | 67594 | 67720 |
| ast | 126992 | 877278 | 2952000 | 1775764 | 1777383 |
| atj | 1872 | 3820 | 6544 | 3247 | 3365 |
| av | 3048 | 8542 | 16115 | 8895 | 9000 |
| avk | 27577 | 85219 | 106100 | 32260 | 33491 |
| awa | 3396 | 5802 | 6617 | 1679 | 2370 |
| ay | 5102 | 15125 | 22802 | 13930 | 13933 |
| az | 180810 | 789902 | 1570889 | 1377797 | 1380325 |
| azb | 240990 | 585386 | 1241661 | 749575 | 753318 |
| ba | 62269 | 391926 | 625645 | 562730 | 563181 |
| ban | 18955 | 44138 | 86239 | 66213 | 66412 |
| bar | 26057 | 83298 | 185158 | 109082 | 109091 |
| bat_smg | 17013 | 41951 | 77417 | 51701 | 51733 |
| bcl | 13783 | 45457 | 78963 | 47819 | 47861 |
| be | 222883 | 821135 | 2499258 | 2204062 | 2204117 |
| bg | 285156 | 1336530 | 3967713 | 3618800 | 3627798 |
| bh | 7658 | 17052 | 29110 | 22157 | 22217 |
| bi | 1403 | 1712 | 3172 | 1991 | 1995 |
| bjn | 9672 | 19007 | 58660 | 32538 | 33071 |
| blk | 2786 | 11825 | 11341 | 5979 | 6129 |
| bm | 1111 | 2421 | 2451 | 1217 | 1218 |
| bn | 136921 | 736388 | 1530942 | 1161967 | 1162761 |
| bo | 11843 | 37121 | 8241 | 6265 | 6359 |
| bpy | 24742 | 115606 | 166906 | 86166 | 86170 |
| br | 78524 | 214128 | 657375 | 527295 | 527606 |
| bs | 86407 | 382114 | 1246030 | 965782 | 966511 |
| bug | 14231 | 14484 | 53879 | 14787 | 15146 |
| bxr | 2730 | 9571 | 27853 | 11560 | 11567 |
| ca | 691444 | 3596667 | 11359870 | 10236358 | 10237666 |
| cbk_zam | 2989 | 8322 | 9939 | 2790 | 2847 |
| cdo | 15922 | 30059 | 63474 | 29659 | 29705 |
| ce | 597137 | 2121587 | 3097393 | 1507129 | 1507806 |
| ceb | 5888811 | 11920613 | 37969424 | 33678489 | 33962205 |
| ch | 574 | 1166 | 2290 | 492 | 601 |
| chr | 980 | 1110 | 1311 | 779 | 790 |
| chy | 711 | 753 | 494 | 428 | 428 |
| ckb | 48903 | 163599 | 435662 | 224749 | 226749 |
| co | 6719 | 22954 | 46391 | 24149 | 24229 |
| cr | 158 | 216 | 209 | 94 | 94 |
| crh | 24117 | 29781 | 98534 | 70231 | 70235 |
| cs | 516037 | 2679537 | 9917806 | 8763103 | 8763291 |
| csb | 5315 | 14009 | 31294 | 16820 | 16820 |
| cu | 1171 | 2796 | 5283 | 2346 | 2349 |
| cv | 50525 | 157542 | 375399 | 166889 | 167497 |
| cy | 276031 | 992900 | 2011030 | 1613064 | 1620632 |
| da | 284765 | 1167917 | 4352733 | 3854239 | 3854549 |
| dag | 9248 | 29213 | 46084 | 10981 | 14213 |
| de | 2780056 | 16093948 | 52497421 | 50480495 | 50480548 |
| din | 485 | 1551 | 1096 | 197 | 197 |
| diq | 37565 | 70969 | 155656 | 141636 | 141695 |
| dsb | 3083 | 8760 | 19397 | 9652 | 9652 |
| dty | 3339 | 6219 | 7505 | 4417 | 4447 |
| dv | 4190 | 16809 | 7906 | 3612 | 3620 |
| dz | 652 | 2623 | 272 | 94 | 100 |
| ee | 1075 | 2326 | 1823 | 861 | 926 |
| el | 224207 | 1527561 | 4181433 | 3119952 | 3121967 |
| eml | 12169 | 53861 | 115729 | 65775 | 65940 |
| en | 6514924 | 40656507 | 109681826 | 107761324 | 107768438 |
| eo | 330486 | 1116191 | 4257655 | 3975927 | 3979379 |
| es | 1792062 | 10890435 | 33729712 | 31581851 | 31648945 |
| et | 233078 | 1110906 | 3558448 | 2879595 | 2886824 |
| eu | 386029 | 1405747 | 3398477 | 3025183 | 3030635 |
| ext | 3472 | 9626 | 20554 | 11966 | 11978 |
| fa | 901254 | 2357271 | 6189352 | 5862106 | 5870803 |
| fat | 1044 | 6092 | 1717 | 120 | 857 |
| ff | 1763 | 4103 | 3483 | 2304 | 2413 |
| fi | 373226 | 1667296 | 5221239 | 4658292 | 4663471 |
| fiu_vro | 6417 | 19897 | 40418 | 23563 | 23609 |
| fj | 1157 | 1782 | 4852 | 1910 | 1911 |
| fo | 11809 | 30828 | 119267 | 95117 | 95259 |
| fr | 2432972 | 15252697 | 43564517 | 42573624 | 42589064 |
| frp | 5341 | 10574 | 36358 | 24905 | 24926 |
| frr | 16038 | 30821 | 80265 | 68184 | 68315 |
| fur | 3665 | 10651 | 29516 | 16249 | 16278 |
| fy | 46011 | 206153 | 1271339 | 985227 | 985511 |
| ga | 52168 | 130535 | 347037 | 288261 | 288309 |
| gag | 2408 | 4844 | 8551 | 4520 | 4520 |
| gan | 4219 | 9689 | 18994 | 14119 | 14128 |
| gcr | 2227 | 5163 | 2763 | 1186 | 1186 |
| gd | 15850 | 48217 | 141290 | 95557 | 95562 |
| gl | 190419 | 910543 | 3674404 | 2937660 | 2938634 |
| glk | 6484 | 15344 | 32631 | 21395 | 21447 |
| gn | 5064 | 15481 | 40641 | 30389 | 30440 |
| gom | 4192 | 37508 | 14192 | 2369 | 2382 |
| gor | 14388 | 28133 | 107341 | 66191 | 67016 |
| got | 960 | 2186 | 4093 | 1404 | 1415 |
| gpe | 899 | 3383 | 1199 | 796 | 815 |
| gu | 30025 | 114805 | 459063 | 348651 | 348731 |
| guc | 546 | 2545 | 2300 | 1025 | 1138 |
| gur | 1010 | 5043 | 1761 | 227 | 244 |
| guw | 1263 | 3719 | 7474 | 3116 | 5375 |
| gv | 5036 | 12213 | 48801 | 19659 | 19663 |
| ha | 31977 | 149096 | 115029 | 97167 | 98184 |
| hak | 8694 | 11505 | 39744 | 28150 | 28152 |
| haw | 2470 | 5810 | 11169 | 5700 | 5705 |
| he | 323472 | 2648617 | 10904148 | 10367532 | 10379886 |
| hi | 150121 | 538451 | 964251 | 795726 | 798254 |
| hif | 10534 | 21169 | 43463 | 23970 | 24316 |
| hr | 189415 | 876107 | 3210326 | 2752205 | 2758602 |
| hsb | 13183 | 40760 | 91863 | 66632 | 66633 |
| ht | 64850 | 154160 | 201547 | 166206 | 167961 |
| hu | 346711 | 1859683 | 5267990 | 4707580 | 4710525 |
| hy | 298066 | 1542920 | 3767938 | 2689014 | 2690466 |
| hyw | 11358 | 83640 | 161227 | 82218 | 84817 |
| ia | 24581 | 43289 | 129914 | 96517 | 96595 |
| id | 620895 | 2138237 | 6589957 | 5629372 | 5644832 |
| ie | 11020 | 22342 | 60890 | 46054 | 46122 |
| ig | 19448 | 110907 | 57963 | 31022 | 31298 |
| ik | 737 | 1016 | 848 | 551 | 580 |
| ilo | 14135 | 74304 | 126533 | 75701 | 75705 |
| inh | 1754 | 4640 | 13284 | 5770 | 6011 |
| io | 36312 | 101555 | 303765 | 258933 | 259001 |
| is | 54348 | 170321 | 574897 | 436767 | 437784 |
| it | 1610989 | 8718610 | 27447754 | 26116131 | 26126157 |
| iu | 502 | 757 | 536 | 414 | 418 |
| ja | 1355269 | 9276459 | 29002111 | 27752954 | 27801000 |
| jam | 1571 | 2260 | 5887 | 3588 | 3590 |
| jbo | 1287 | 3088 | 5831 | 546 | 546 |
| jv | 66323 | 148710 | 547010 | 381682 | 382052 |
| ka | 167161 | 695865 | 2275552 | 422090 | 422095 |
| kaa | 3540 | 9814 | 12930 | 5312 | 5752 |
| kab | 5346 | 14709 | 36889 | 22000 | 22050 |
| kbd | 1549 | 6348 | 14594 | 5277 | 5280 |
| kbp | 1846 | 6005 | 7119 | 6875 | 6880 |
| kcg | 871 | 1839 | 2953 | 1857 | 1871 |
| kg | 1187 | 1933 | 3835 | 2292 | 2295 |
| ki | 1482 | 2899 | 2035 | 1386 | 1649 |
| kk | 235740 | 889990 | 1840304 | 1143049 | 1151399 |
| kl | 282 | 1024 | 1337 | 302 | 302 |
| km | 11422 | 84697 | 111378 | 40954 | 41529 |
| kn | 30729 | 261724 | 432994 | 188536 | 188807 |
| ko | 606386 | 2159706 | 6217786 | 5715559 | 5725614 |
| koi | 3260 | 9065 | 17068 | 10628 | 10628 |
| krc | 1465 | 6234 | 18092 | 7294 | 7311 |
| ks | 4176 | 9446 | 15252 | 5917 | 6226 |
| ksh | 2836 | 11043 | 26577 | 9484 | 9496 |
| ku | 55166 | 112840 | 269080 | 208679 | 210304 |
| kv | 5236 | 13396 | 32141 | 26727 | 26744 |
| kw | 6884 | 18901 | 49462 | 28074 | 28194 |
| ky | 75426 | 191772 | 271376 | 189656 | 190133 |
| la | 124150 | 240343 | 1456464 | 1283285 | 1283728 |
| lad | 3538 | 11910 | 37456 | 19124 | 19124 |
| lb | 57747 | 178507 | 573528 | 443583 | 444601 |
| lbe | 1205 | 2249 | 4470 | 2543 | 2543 |
| lez | 4067 | 16675 | 36970 | 25834 | 25842 |
| lfn | 4506 | 21746 | 29785 | 14554 | 14560 |
| lg | 3814 | 23386 | 15539 | 2088 | 2724 |
| li | 14134 | 58711 | 212772 | 137110 | 137367 |
| lij | 8092 | 23366 | 61410 | 34939 | 34940 |
| lld | 152613 | 158049 | 578033 | 443976 | 458150 |
| lmo | 67387 | 136650 | 373890 | 274174 | 274612 |
| ln | 3132 | 6066 | 11086 | 7838 | 7874 |
| lo | 4734 | 15005 | 27132 | 8562 | 8799 |
| lt | 204135 | 775863 | 2687983 | 2406710 | 2414909 |
| ltg | 1018 | 2979 | 5815 | 2190 | 2193 |
| lv | 118530 | 437086 | 1458341 | 1244609 | 1247181 |
| mad | 1113 | 3500 | 3762 | 1149 | 1157 |
| mai | 13285 | 22572 | 53246 | 38119 | 38128 |
| map_bms | 10875 | 16411 | 67964 | 51125 | 51137 |
| mdf | 4002 | 11043 | 21658 | 9178 | 9183 |
| mg | 92227 | 213580 | 328751 | 265931 | 267633 |
| mhr | 11010 | 33013 | 60771 | 38153 | 38220 |
| mi | 7274 | 10154 | 29052 | 24854 | 25216 |
| min | 223075 | 422381 | 1315030 | 513108 | 515548 |
| mk | 131522 | 695456 | 1984109 | 1639280 | 1640744 |
| ml | 84334 | 415940 | 797903 | 485482 | 486324 |
| mn | 23434 | 124485 | 295548 | 142014 | 142984 |
| mni | 10354 | 18872 | 29474 | 18810 | 19876 |
| mnw | 3136 | 34165 | 9342 | 1908 | 2387 |
| mr | 92464 | 326662 | 633452 | 383501 | 392709 |
| mrj | 10156 | 20132 | 48416 | 24098 | 24098 |
| ms | 344459 | 988647 | 2424535 | 1932685 | 1937647 |
| mt | 5381 | 49856 | 104636 | 51251 | 51278 |
| mwl | 4402 | 37271 | 127176 | 25729 | 26366 |
| my | 103938 | 334243 | 445026 | 300567 | 303288 |
| myv | 7515 | 21592 | 36762 | 26570 | 26591 |
| mzn | 17364 | 39937 | 89805 | 46962 | 47020 |
| nah | 5934 | 12478 | 30805 | 13093 | 14364 |
| nap | 11235 | 22336 | 41891 | 20798 | 20804 |
| nds | 79228 | 242004 | 583941 | 305374 | 305422 |
| nds_nl | 6484 | 28252 | 94875 | 51767 | 51785 |
| ne | 30359 | 91033 | 153937 | 124841 | 125078 |
| new | 71653 | 245033 | 454251 | 289444 | 289912 |
| nia | 1496 | 4047 | 4524 | 2258 | 2812 |
| nl | 1948842 | 5867108 | 17953497 | 16886996 | 16893078 |
| nn | 160106 | 549454 | 1751481 | 1375622 | 1376155 |
| no | 591000 | 2213493 | 7050421 | 6471776 | 6476157 |
| nov | 1341 | 3711 | 7466 | 3948 | 3955 |
| nqo | 1489 | 9858 | 23633 | 6056 | 6981 |
| nrm | 4571 | 14279 | 38935 | 33295 | 33321 |
| nso | 7618 | 9505 | 36826 | 35621 | 35623 |
| nv | 21911 | 57663 | 123762 | 107139 | 107139 |
| ny | 1060 | 3164 | 4750 | 1455 | 1490 |
| oc | 85099 | 303185 | 1035051 | 791403 | 792043 |
| olo | 4348 | 14334 | 18704 | 8634 | 8647 |
| om | 1710 | 7496 | 8222 | 4333 | 4416 |
| or | 17027 | 76677 | 137274 | 57023 | 57064 |
| os | 17468 | 40488 | 80943 | 48124 | 48414 |
| pa | 50421 | 226354 | 344239 | 197594 | 198080 |
| pag | 2533 | 41416 | 4150 | 2907 | 2907 |
| pam | 7816 | 16493 | 53785 | 29375 | 29715 |
| pap | 3153 | 12086 | 22157 | 18161 | 18233 |
| pcd | 5272 | 12203 | 15602 | 12319 | 12360 |
| pcm | 1019 | 4631 | 4161 | 1160 | 1261 |
| pdc | 2009 | 5406 | 8151 | 4122 | 4144 |
| pfl | 2717 | 14024 | 26150 | 10291 | 10294 |
| pi | 2972 | 5959 | 7773 | 201 | 201 |
| pih | 829 | 1065 | 2857 | 2016 | 2018 |
| pl | 1468194 | 5599437 | 19364191 | 18389560 | 18405120 |
| pms | 66552 | 170133 | 369956 | 308593 | 314917 |
| pnb | 67534 | 402101 | 937247 | 525105 | 533265 |
| pnt | 497 | 1467 | 3553 | 1715 | 1716 |
| ps | 19254 | 134868 | 72493 | 36348 | 36899 |
| pt | 1048823 | 5226543 | 16811382 | 15714686 | 15714890 |
| pwn | 328 | 1825 | 990 | 428 | 430 |
| qu | 22365 | 47078 | 133032 | 106686 | 106708 |
| rm | 3569 | 27345 | 47169 | 20460 | 20490 |
| rmy | 911 | 2221 | 4235 | 1854 | 1965 |
| rn | 726 | 1641 | 1436 | 594 | 601 |
| ro | 417630 | 1518438 | 4282072 | 3764830 | 3765626 |
| roa_rup | 1270 | 2751 | 4641 | 2527 | 2537 |
| roa_tara | 8407 | 18031 | 42040 | 14330 | 14331 |
| ru | 1889271 | 12344758 | 30796034 | 29268121 | 29288089 |
| rue | 7369 | 21429 | 61022 | 43241 | 43256 |
| rw | 7793 | 35619 | 38066 | 19821 | 20967 |
| sa | 12069 | 78188 | 104193 | 40307 | 41518 |
| sah | 16007 | 76450 | 82154 | 61041 | 61412 |
| sat | 8655 | 43624 | 57493 | 28497 | 28820 |
| sc | 6919 | 24434 | 66719 | 44707 | 44733 |
| scn | 21990 | 49686 | 132583 | 102735 | 102774 |
| sco | 34097 | 86464 | 301450 | 148184 | 148406 |
| sd | 16228 | 48679 | 79392 | 34572 | 35729 |
| se | 6101 | 10531 | 25844 | 17978 | 18010 |
| sg | 473 | 537 | 318 | 184 | 184 |
| sh | 445218 | 1213741 | 4337559 | 3858400 | 3860253 |
| shi | 1650 | 6036 | 10364 | 4715 | 4926 |
| shn | 10653 | 51542 | 46976 | 29925 | 29993 |
| si | 21959 | 132932 | 146935 | 55158 | 56422 |
| simple | 224811 | 618711 | 2014692 | 1689101 | 1689185 |
| sk | 230073 | 845501 | 2867955 | 2468707 | 2469129 |
| skr | 5505 | 62742 | 38412 | 15004 | 21015 |
| sl | 175804 | 810714 | 2597824 | 2067682 | 2068522 |
| sm | 995 | 1591 | 3838 | 2515 | 2523 |
| smn | 5004 | 12483 | 37008 | 22440 | 22492 |
| sn | 10159 | 19527 | 40437 | 31573 | 32763 |
| so | 8540 | 36173 | 53012 | 42913 | 43548 |
| sq | 94941 | 371562 | 699210 | 520709 | 522241 |
| sr | 657766 | 2331205 | 6562651 | 5257496 | 5264077 |
| srn | 1171 | 3050 | 6637 | 1752 | 1941 |
| ss | 783 | 2124 | 2382 | 1127 | 1139 |
| st | 982 | 1971 | 2510 | 1689 | 1701 |
| stq | 3648 | 10972 | 29713 | 15919 | 15920 |
| su | 57552 | 122590 | 496201 | 384518 | 384891 |
| sv | 2418380 | 5019466 | 22263222 | 21445193 | 21445441 |
| sw | 75109 | 218219 | 798980 | 688743 | 692052 |
| szl | 56229 | 109496 | 473528 | 129434 | 129479 |
| szy | 4628 | 49166 | 18867 | 2419 | 3187 |
| ta | 157642 | 780711 | 1642095 | 1141032 | 1142372 |
| tay | 2643 | 15831 | 10104 | 1496 | 5312 |
| tcy | 2135 | 9932 | 11073 | 4680 | 4745 |
| te | 83866 | 719826 | 822054 | 619184 | 622092 |
| tet | 1323 | 3797 | 8047 | 4093 | 4095 |
| tg | 108598 | 279635 | 761826 | 330974 | 331423 |
| th | 153075 | 715083 | 1723394 | 1395935 | 1398891 |
| ti | 388 | 987 | 1191 | 325 | 326 |
| tk | 4739 | 23629 | 18964 | 9717 | 9760 |
| tl | 43388 | 150141 | 447293 | 296084 | 296634 |
| tn | 1090 | 3960 | 3976 | 2008 | 2010 |
| to | 1512 | 2754 | 3542 | 2029 | 2080 |
| tpi | 1278 | 2055 | 3897 | 2193 | 2198 |
| tr | 500435 | 1806253 | 4476004 | 3964449 | 3965589 |
| trv | 1770 | 16650 | 3814 | 504 | 969 |
| ts | 674 | 1798 | 1557 | 903 | 909 |
| tt | 484761 | 1196573 | 2064576 | 1675637 | 1676579 |
| tum | 16778 | 31383 | 57382 | 28399 | 37107 |
| tw | 3568 | 16807 | 15312 | 10912 | 11495 |
| ty | 1175 | 1364 | 1563 | 1095 | 1095 |
| tyv | 3399 | 21968 | 21004 | 5535 | 5557 |
| udm | 5066 | 11432 | 24875 | 17709 | 17715 |
| ug | 8102 | 58982 | 23654 | 12671 | 12874 |
| uk | 522709 | 2867475 | 6800045 | 6445628 | 6451294 |
| ur | 194948 | 676227 | 1870488 | 910419 | 914840 |
| uz | 232879 | 859793 | 1344790 | 1073065 | 1084092 |
| ve | 764 | 1359 | 2524 | 2366 | 2366 |
| vec | 62729 | 98987 | 275972 | 194424 | 194447 |
| vep | 6853 | 43014 | 93864 | 39225 | 39228 |
| vi | 1300753 | 4103594 | 10852870 | 6884928 | 6892519 |
| vls | 7272 | 26374 | 61885 | 49639 | 49653 |
| vo | 32133 | 78015 | 125495 | 101612 | 101629 |
| wa | 11104 | 56305 | 116752 | 79686 | 80037 |
| war | 1158901 | 1342594 | 6654010 | 6009636 | 6009641 |
| wo | 1659 | 7693 | 10828 | 4057 | 4103 |
| wuu | 37170 | 58227 | 121928 | 82184 | 82237 |
| xal | 2008 | 4309 | 4582 | 2112 | 2113 |
| xh | 1502 | 4448 | 6733 | 2128 | 2186 |
| xmf | 19201 | 49944 | 179291 | 21189 | 22041 |
| yi | 14164 | 68937 | 172645 | 116102 | 116325 |
| yo | 29938 | 52231 | 85171 | 46928 | 47346 |
| za | 2388 | 3917 | 7463 | 4613 | 4665 |
| zea | 5445 | 16648 | 36161 | 23532 | 23578 |
| zh | 1310818 | 5501834 | 16397675 | 14380752 | 14421795 |
| zh_classical | 11775 | 44053 | 140340 | 71576 | 71692 |
| zh_min_nan | 425676 | 853753 | 2627115 | 2053956 | 2054838 |
| zh_yue | 121401 | 273459 | 844047 | 683130 | 683226 |
| zu | 10387 | 18211 | 22569 | 20193 | 20238 |
#### Validation
| | Articles | Paragraphs | Anchors | Anchors with QIDs | Anchors with PageIDs |
| :-- | --: | --: | --: | --: | --: |
| ab | 475 | 601 | 1061 | 399 | 399 |
| ace | 2443 | 2668 | 5197 | 2583 | 2587 |
| ady | 142 | 183 | 248 | 150 | 151 |
| af | 27383 | 44157 | 109108 | 100078 | 100123 |
| als | 11998 | 18277 | 44634 | 32874 | 32874 |
| alt | 481 | 827 | 1020 | 621 | 621 |
| am | 3746 | 5234 | 10111 | 5731 | 5756 |
| ami | 749 | 1431 | 744 | 179 | 304 |
| an | 10526 | 13588 | 74808 | 58195 | 58259 |
| ang | 826 | 1099 | 2647 | 1099 | 1102 |
| anp | 504 | 751 | 1698 | 437 | 581 |
| ar | 265368 | 401215 | 1295968 | 1249666 | 1250103 |
| arc | 377 | 418 | 1061 | 610 | 617 |
| ary | 1447 | 1870 | 5702 | 3885 | 3887 |
| arz | 367206 | 410487 | 876531 | 767742 | 768942 |
| as | 5463 | 8589 | 13953 | 7719 | 7732 |
| ast | 48345 | 97904 | 329690 | 197832 | 198042 |
| atj | 399 | 440 | 774 | 406 | 416 |
| av | 719 | 961 | 1918 | 1043 | 1053 |
| avk | 8056 | 9538 | 11816 | 3633 | 3772 |
| awa | 515 | 645 | 721 | 213 | 287 |
| ay | 1391 | 1653 | 2616 | 1481 | 1483 |
| az | 57070 | 88136 | 177151 | 155596 | 155858 |
| azb | 57642 | 64997 | 137053 | 83336 | 83778 |
| ba | 25690 | 43460 | 69052 | 61624 | 61666 |
| ban | 4053 | 4840 | 9581 | 7374 | 7385 |
| bar | 6905 | 9377 | 20546 | 12164 | 12164 |
| bat_smg | 4149 | 4706 | 8787 | 5820 | 5823 |
| bcl | 3355 | 5058 | 8759 | 5080 | 5083 |
| be | 64203 | 91174 | 276525 | 244114 | 244122 |
| bg | 98148 | 148234 | 438687 | 400356 | 401330 |
| bh | 1535 | 1891 | 3464 | 2630 | 2635 |
| bi | 154 | 159 | 251 | 151 | 151 |
| bjn | 1764 | 2166 | 6458 | 3694 | 3775 |
| blk | 887 | 1374 | 1538 | 821 | 839 |
| bm | 196 | 272 | 317 | 146 | 146 |
| bn | 50495 | 81841 | 169097 | 128508 | 128609 |
| bo | 2198 | 4079 | 934 | 746 | 752 |
| bpy | 10057 | 12879 | 18710 | 9693 | 9693 |
| br | 18687 | 23734 | 73278 | 59024 | 59056 |
| bs | 28533 | 42574 | 138483 | 107760 | 107846 |
| bug | 1636 | 1655 | 6141 | 1682 | 1731 |
| bxr | 754 | 1003 | 2930 | 1211 | 1211 |
| ca | 251952 | 399403 | 1265187 | 1140208 | 1140359 |
| cbk_zam | 460 | 932 | 1040 | 268 | 272 |
| cdo | 2953 | 3237 | 6938 | 3273 | 3281 |
| ce | 197899 | 234617 | 341843 | 166126 | 166206 |
| ceb | 1221405 | 1324624 | 4218179 | 3742385 | 3773844 |
| ch | 123 | 131 | 239 | 64 | 73 |
| chr | 124 | 134 | 175 | 100 | 100 |
| chy | 67 | 67 | 47 | 42 | 42 |
| ckb | 13511 | 18279 | 48490 | 25365 | 25540 |
| co | 1723 | 2587 | 5286 | 2729 | 2737 |
| cr | 22 | 23 | 22 | 13 | 13 |
| crh | 2978 | 3246 | 11005 | 7899 | 7899 |
| cs | 189136 | 297000 | 1101343 | 974485 | 974505 |
| csb | 1307 | 1533 | 3341 | 1851 | 1851 |
| cu | 250 | 275 | 540 | 229 | 229 |
| cv | 14374 | 17462 | 42486 | 19049 | 19114 |
| cy | 89897 | 110225 | 222476 | 177842 | 178698 |
| da | 87765 | 129990 | 482701 | 427333 | 427374 |
| dag | 2215 | 3237 | 4935 | 1169 | 1498 |
| de | 1120553 | 1788057 | 5831103 | 5607963 | 5607963 |
| din | 149 | 177 | 128 | 15 | 15 |
| diq | 6660 | 7883 | 17684 | 15853 | 15861 |
| dsb | 781 | 1032 | 2476 | 1301 | 1301 |
| dty | 554 | 659 | 861 | 480 | 483 |
| dv | 1227 | 1898 | 870 | 406 | 406 |
| dz | 215 | 303 | 21 | 8 | 8 |
| ee | 203 | 242 | 183 | 66 | 74 |
| el | 99725 | 169395 | 461747 | 344216 | 344456 |
| eml | 4387 | 6114 | 13938 | 8193 | 8214 |
| en | 2503257 | 4516442 | 12185882 | 11974436 | 11975194 |
| eo | 90949 | 123848 | 474727 | 442357 | 442772 |
| es | 701171 | 1209944 | 3752765 | 3514968 | 3522213 |
| et | 80911 | 123354 | 395877 | 319773 | 320587 |
| eu | 104388 | 156552 | 378553 | 337331 | 337944 |
| ext | 804 | 1045 | 2269 | 1344 | 1345 |
| fa | 191532 | 262121 | 688824 | 652200 | 653219 |
| fat | 446 | 709 | 214 | 3 | 97 |
| ff | 361 | 459 | 378 | 222 | 234 |
| fi | 123327 | 184244 | 576163 | 514419 | 514915 |
| fiu_vro | 1738 | 2263 | 4622 | 2623 | 2628 |
| fj | 168 | 213 | 604 | 214 | 214 |
| fo | 2625 | 3398 | 13383 | 10599 | 10617 |
| fr | 954388 | 1695419 | 4847588 | 4738268 | 4740047 |
| frp | 1018 | 1181 | 4089 | 2862 | 2862 |
| frr | 2968 | 3419 | 9609 | 7996 | 8011 |
| fur | 884 | 1168 | 3225 | 1833 | 1839 |
| fy | 15980 | 22974 | 139530 | 108300 | 108337 |
| ga | 10781 | 14493 | 38848 | 32343 | 32352 |
| gag | 440 | 551 | 961 | 465 | 465 |
| gan | 731 | 1045 | 2071 | 1536 | 1537 |
| gcr | 480 | 567 | 297 | 122 | 122 |
| gd | 4393 | 5296 | 15544 | 10458 | 10458 |
| gl | 62030 | 101112 | 407821 | 325854 | 325960 |
| glk | 1383 | 1747 | 3723 | 2435 | 2443 |
| gn | 1164 | 1728 | 4751 | 3521 | 3528 |
| gom | 2106 | 4116 | 1511 | 251 | 251 |
| gor | 2844 | 3082 | 11826 | 7315 | 7411 |
| got | 216 | 245 | 514 | 190 | 190 |
| gpe | 265 | 355 | 93 | 71 | 73 |
| gu | 8437 | 13008 | 50956 | 38242 | 38251 |
| guc | 198 | 279 | 312 | 141 | 162 |
| gur | 369 | 565 | 145 | 25 | 27 |
| guw | 332 | 393 | 827 | 313 | 616 |
| gv | 957 | 1324 | 5652 | 2252 | 2253 |
| ha | 10666 | 16571 | 12853 | 10862 | 10993 |
| hak | 1179 | 1302 | 4628 | 3155 | 3155 |
| haw | 541 | 650 | 1238 | 616 | 618 |
| he | 165541 | 295188 | 1213939 | 1153986 | 1155384 |
| hi | 36229 | 60184 | 108382 | 89102 | 89340 |
| hif | 2107 | 2369 | 5015 | 2648 | 2680 |
| hr | 62673 | 97103 | 354392 | 304964 | 305664 |
| hsb | 3599 | 4379 | 10001 | 7239 | 7240 |
| ht | 14693 | 17294 | 23011 | 18721 | 18928 |
| hu | 125438 | 206546 | 586091 | 523501 | 523814 |
| hy | 113060 | 171415 | 418503 | 298111 | 298292 |
| hyw | 5310 | 9207 | 17616 | 8842 | 9168 |
| ia | 4021 | 4850 | 14972 | 11257 | 11263 |
| id | 158648 | 237793 | 734148 | 627764 | 629525 |
| ie | 2213 | 2523 | 6750 | 5036 | 5046 |
| ig | 7944 | 12354 | 6464 | 3466 | 3493 |
| ik | 100 | 118 | 120 | 64 | 71 |
| ilo | 4096 | 8297 | 14183 | 8609 | 8609 |
| inh | 399 | 494 | 1298 | 626 | 645 |
| io | 8868 | 11368 | 33682 | 28744 | 28748 |
| is | 13573 | 18566 | 62576 | 47263 | 47360 |
| it | 584902 | 968880 | 3050620 | 2902006 | 2903047 |
| iu | 61 | 62 | 48 | 29 | 29 |
| ja | 573457 | 1032568 | 3222875 | 3083301 | 3088604 |
| jam | 249 | 274 | 623 | 399 | 399 |
| jbo | 270 | 321 | 562 | 56 | 56 |
| jv | 13108 | 16457 | 60143 | 42112 | 42148 |
| ka | 53071 | 76961 | 252383 | 46974 | 46975 |
| kaa | 775 | 1071 | 1476 | 669 | 717 |
| kab | 1269 | 1685 | 4050 | 2397 | 2403 |
| kbd | 474 | 663 | 1482 | 537 | 537 |
| kbp | 535 | 656 | 835 | 810 | 811 |
| kcg | 190 | 223 | 311 | 196 | 197 |
| kg | 187 | 213 | 420 | 260 | 260 |
| ki | 273 | 333 | 248 | 169 | 206 |
| kk | 76635 | 99268 | 204324 | 126732 | 127677 |
| kl | 97 | 129 | 162 | 43 | 43 |
| km | 3844 | 9340 | 12192 | 4524 | 4583 |
| kn | 14217 | 29387 | 48402 | 20992 | 21022 |
| ko | 154713 | 239887 | 689906 | 633527 | 634725 |
| koi | 682 | 1010 | 1815 | 1144 | 1144 |
| krc | 423 | 698 | 2022 | 841 | 846 |
| ks | 888 | 1006 | 1692 | 645 | 670 |
| ksh | 918 | 1156 | 2951 | 1053 | 1055 |
| ku | 10060 | 12771 | 29766 | 23050 | 23232 |
| kv | 1105 | 1456 | 3365 | 2787 | 2787 |
| kw | 1820 | 2171 | 5570 | 3076 | 3082 |
| ky | 16655 | 21571 | 31213 | 21712 | 21757 |
| la | 22397 | 26732 | 161732 | 142447 | 142486 |
| lad | 961 | 1286 | 3984 | 2056 | 2056 |
| lb | 15385 | 19667 | 60568 | 46664 | 46730 |
| lbe | 207 | 232 | 488 | 290 | 290 |
| lez | 1184 | 1764 | 3829 | 2760 | 2760 |
| lfn | 1455 | 2435 | 3328 | 1602 | 1604 |
| lg | 1272 | 2650 | 1795 | 239 | 305 |
| li | 4501 | 6650 | 24213 | 15790 | 15826 |
| lij | 1781 | 2607 | 6658 | 3933 | 3933 |
| lld | 17293 | 17539 | 64059 | 49327 | 50864 |
| lmo | 12641 | 14976 | 40217 | 29874 | 29946 |
| ln | 585 | 692 | 1321 | 996 | 997 |
| lo | 1144 | 1680 | 3023 | 991 | 1013 |
| lt | 62652 | 85962 | 300456 | 269264 | 270227 |
| ltg | 289 | 341 | 686 | 285 | 285 |
| lv | 34742 | 48371 | 160433 | 136594 | 136873 |
| mad | 284 | 381 | 439 | 135 | 136 |
| mai | 2184 | 2499 | 5878 | 4209 | 4212 |
| map_bms | 1539 | 1847 | 7486 | 5705 | 5705 |
| mdf | 1086 | 1244 | 2512 | 1077 | 1077 |
| mg | 20361 | 23650 | 36313 | 29821 | 29974 |
| mhr | 2863 | 3594 | 6538 | 4114 | 4122 |
| mi | 1078 | 1154 | 3214 | 2743 | 2776 |
| min | 42987 | 46277 | 143692 | 55809 | 56077 |
| mk | 46235 | 76890 | 219310 | 180884 | 181042 |
| ml | 31116 | 46345 | 88976 | 53726 | 53818 |
| mn | 8485 | 13887 | 32271 | 15330 | 15455 |
| mni | 1843 | 2102 | 3418 | 2183 | 2325 |
| mnw | 1284 | 3750 | 897 | 202 | 224 |
| mr | 26803 | 36202 | 70510 | 43103 | 44352 |
| mrj | 2062 | 2297 | 5627 | 2888 | 2888 |
| ms | 75473 | 110077 | 270064 | 215280 | 215811 |
| mt | 2516 | 5510 | 11680 | 5760 | 5761 |
| mwl | 1828 | 4316 | 15365 | 3216 | 3287 |
| my | 24005 | 37165 | 49321 | 33223 | 33518 |
| myv | 1732 | 2327 | 4094 | 2923 | 2925 |
| mzn | 3784 | 4409 | 9938 | 5199 | 5205 |
| nah | 1128 | 1314 | 3316 | 1418 | 1556 |
| nap | 2047 | 2473 | 4579 | 2249 | 2249 |
| nds | 20646 | 26845 | 65355 | 34090 | 34094 |
| nds_nl | 2127 | 3063 | 10188 | 5585 | 5587 |
| ne | 6956 | 10087 | 16847 | 13502 | 13536 |
| new | 22645 | 27233 | 50860 | 32165 | 32217 |
| nia | 312 | 430 | 512 | 277 | 329 |
| nl | 490380 | 651743 | 1994062 | 1874588 | 1875259 |
| nn | 44180 | 60918 | 194747 | 153072 | 153140 |
| no | 172653 | 245377 | 779775 | 715618 | 716153 |
| nov | 339 | 410 | 861 | 452 | 452 |
| nqo | 583 | 1037 | 2598 | 704 | 813 |
| nrm | 1318 | 1600 | 4276 | 3734 | 3736 |
| nso | 960 | 1038 | 4242 | 4119 | 4119 |
| nv | 5649 | 6281 | 13652 | 11768 | 11768 |
| ny | 236 | 318 | 392 | 126 | 126 |
| oc | 23067 | 33775 | 115155 | 87980 | 88063 |
| olo | 1273 | 1598 | 2162 | 997 | 998 |
| om | 401 | 830 | 891 | 401 | 412 |
| or | 6261 | 8669 | 16120 | 6752 | 6757 |
| os | 3923 | 4535 | 9130 | 5470 | 5524 |
| pa | 17242 | 24844 | 37813 | 21759 | 21812 |
| pag | 1602 | 4519 | 404 | 300 | 300 |
| pam | 1509 | 1831 | 6019 | 3230 | 3272 |
| pap | 773 | 1376 | 2526 | 2042 | 2056 |
| pcd | 1089 | 1361 | 1803 | 1334 | 1338 |
| pcm | 353 | 542 | 409 | 128 | 139 |
| pdc | 370 | 565 | 839 | 424 | 429 |
| pfl | 1113 | 1500 | 2861 | 1070 | 1070 |
| pi | 578 | 682 | 881 | 26 | 26 |
| pih | 118 | 125 | 317 | 217 | 218 |
| pl | 444095 | 621669 | 2149058 | 2041686 | 2043400 |
| pms | 16530 | 19186 | 41547 | 34783 | 35474 |
| pnb | 21586 | 44654 | 103992 | 58461 | 59380 |
| pnt | 147 | 172 | 389 | 177 | 178 |
| ps | 7566 | 14922 | 8427 | 4108 | 4187 |
| pt | 349931 | 580790 | 1868210 | 1745832 | 1745858 |
| pwn | 103 | 166 | 85 | 31 | 31 |
| qu | 4540 | 5211 | 14781 | 11746 | 11750 |
| rm | 1076 | 3100 | 5539 | 2293 | 2298 |
| rmy | 214 | 235 | 446 | 176 | 184 |
| rn | 125 | 172 | 124 | 53 | 53 |
| ro | 106169 | 168972 | 473512 | 416263 | 416347 |
| roa_rup | 214 | 290 | 458 | 254 | 254 |
| roa_tara | 1278 | 1979 | 4455 | 1534 | 1534 |
| ru | 806592 | 1369860 | 3416036 | 3245837 | 3247963 |
| rue | 2022 | 2513 | 7023 | 5064 | 5066 |
| rw | 2577 | 3925 | 4139 | 2223 | 2349 |
| sa | 4344 | 8607 | 11313 | 4249 | 4391 |
| sah | 4729 | 8472 | 9040 | 6623 | 6660 |
| sat | 3485 | 4960 | 6473 | 3225 | 3278 |
| sc | 1900 | 2807 | 7641 | 5096 | 5098 |
| scn | 4263 | 5604 | 14333 | 11167 | 11171 |
| sco | 7382 | 9639 | 33771 | 16432 | 16453 |
| sd | 3970 | 5499 | 8879 | 3804 | 3925 |
| se | 982 | 1149 | 2841 | 1958 | 1958 |
| sg | 67 | 72 | 36 | 24 | 24 |
| sh | 103283 | 135121 | 484459 | 429555 | 429770 |
| shi | 477 | 679 | 1144 | 545 | 570 |
| shn | 3633 | 5630 | 5456 | 3627 | 3639 |
| si | 7672 | 14760 | 16443 | 6215 | 6346 |
| simple | 52503 | 68765 | 224811 | 187586 | 187598 |
| sk | 67520 | 93957 | 317232 | 272711 | 272779 |
| skr | 2090 | 6926 | 4136 | 1683 | 2359 |
| sl | 55621 | 89740 | 285769 | 228421 | 228530 |
| sm | 153 | 171 | 485 | 297 | 297 |
| smn | 1163 | 1420 | 4517 | 2681 | 2688 |
| sn | 1896 | 2139 | 4351 | 3384 | 3529 |
| so | 2358 | 4032 | 6064 | 5027 | 5083 |
| sq | 25223 | 41621 | 79295 | 59156 | 59350 |
| sr | 177997 | 258455 | 728755 | 584663 | 585394 |
| srn | 281 | 342 | 796 | 205 | 225 |
| ss | 188 | 259 | 265 | 125 | 125 |
| st | 157 | 198 | 248 | 164 | 166 |
| stq | 804 | 1162 | 3150 | 1816 | 1816 |
| su | 10348 | 13687 | 55055 | 42915 | 42944 |
| sv | 467467 | 558522 | 2473790 | 2382576 | 2382608 |
| sw | 18014 | 24348 | 90302 | 77817 | 78145 |
| szl | 11292 | 12173 | 52459 | 14419 | 14424 |
| szy | 2391 | 5418 | 2042 | 235 | 285 |
| ta | 59923 | 87114 | 183399 | 126977 | 127148 |
| tay | 1192 | 1757 | 1101 | 175 | 591 |
| tcy | 769 | 1077 | 1089 | 464 | 465 |
| te | 43790 | 79667 | 91327 | 69148 | 69484 |
| tet | 294 | 412 | 871 | 471 | 471 |
| tg | 27060 | 31599 | 86180 | 37522 | 37561 |
| th | 49169 | 78814 | 189768 | 154097 | 154453 |
| ti | 87 | 99 | 89 | 22 | 22 |
| tk | 1328 | 2612 | 2116 | 1056 | 1062 |
| tl | 11731 | 16623 | 49726 | 32858 | 32914 |
| tn | 296 | 424 | 477 | 278 | 278 |
| to | 254 | 277 | 393 | 230 | 233 |
| tpi | 180 | 207 | 394 | 216 | 217 |
| tr | 134938 | 200972 | 496960 | 440639 | 440790 |
| trv | 807 | 1814 | 400 | 53 | 98 |
| ts | 155 | 203 | 219 | 132 | 132 |
| tt | 113689 | 132676 | 228544 | 185563 | 185662 |
| tum | 2188 | 3516 | 6442 | 3105 | 4083 |
| tw | 1249 | 1885 | 1729 | 1217 | 1291 |
| ty | 162 | 167 | 215 | 143 | 143 |
| tyv | 1494 | 2486 | 2342 | 611 | 617 |
| udm | 1036 | 1240 | 2781 | 1957 | 1957 |
| ug | 2629 | 6556 | 2657 | 1479 | 1493 |
| uk | 203057 | 318240 | 758049 | 718278 | 718908 |
| ur | 54784 | 75152 | 206169 | 99493 | 100041 |
| uz | 65767 | 95465 | 149763 | 119192 | 120519 |
| ve | 128 | 148 | 256 | 229 | 229 |
| vec | 9463 | 11242 | 32188 | 22525 | 22531 |
| vep | 3225 | 4804 | 10375 | 4295 | 4295 |
| vi | 330763 | 455933 | 1211343 | 768936 | 769829 |
| vls | 2189 | 2904 | 7133 | 5776 | 5777 |
| vo | 7308 | 8647 | 13902 | 11270 | 11273 |
| wa | 4457 | 6269 | 12736 | 8751 | 8794 |
| war | 146537 | 149236 | 738087 | 666983 | 666983 |
| wo | 516 | 864 | 1083 | 404 | 414 |
| wuu | 5530 | 6448 | 13732 | 9168 | 9171 |
| xal | 407 | 449 | 549 | 308 | 308 |
| xh | 399 | 550 | 804 | 284 | 293 |
| xmf | 4516 | 5414 | 19437 | 2342 | 2447 |
| yi | 5260 | 7563 | 18821 | 12493 | 12510 |
| yo | 4431 | 5855 | 9761 | 5361 | 5410 |
| za | 335 | 414 | 777 | 457 | 458 |
| zea | 1470 | 1847 | 3682 | 2569 | 2574 |
| zh | 389361 | 611537 | 1817382 | 1592929 | 1597686 |
| zh_classical | 3601 | 4995 | 15834 | 8157 | 8170 |
| zh_min_nan | 87849 | 94529 | 291330 | 227978 | 228083 |
| zh_yue | 23579 | 30146 | 92720 | 75081 | 75096 |
| zu | 1646 | 2050 | 2518 | 2228 | 2234 |
**NOTE:** The article counts in the tables above refer to the number of articles with at least one paragraph appearing in the given split.
## Additional Information
### Licensing Information
The WikiAnc dataset is released under the [Creative Commons Attribution ShareAlike 4.0 International](https://creativecommons.org/licenses/by-sa/4.0/) license.
|
scan | 2023-06-01T14:59:55.000Z | [
"task_categories:text2text-generation",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:bsd",
"multi-turn",
"arxiv:1711.00350",
"region:us"
] | null | SCAN tasks with various splits.
SCAN is a set of simple language-driven navigation tasks for studying
compositional learning and zero-shot generalization.
See https://github.com/brendenlake/SCAN for a description of the splits.
Example usage:
data = datasets.load_dataset('scan', 'length') | @inproceedings{Lake2018GeneralizationWS,
title={Generalization without Systematicity: On the Compositional Skills of
Sequence-to-Sequence Recurrent Networks},
author={Brenden M. Lake and Marco Baroni},
booktitle={ICML},
year={2018},
url={https://arxiv.org/pdf/1711.00350.pdf},
} | null | 2 | 1,860 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- bsd
multilinguality:
- monolingual
pretty_name: SCAN
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: scan
tags:
- multi-turn
dataset_info:
- config_name: simple
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 3217770
num_examples: 16728
- name: test
num_bytes: 799912
num_examples: 4182
download_size: 4080388
dataset_size: 4017682
- config_name: addprim_jump
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 2535625
num_examples: 14670
- name: test
num_bytes: 1508445
num_examples: 7706
download_size: 4111174
dataset_size: 4044070
- config_name: addprim_turn_left
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 3908891
num_examples: 21890
- name: test
num_bytes: 170063
num_examples: 1208
download_size: 4148216
dataset_size: 4078954
- config_name: filler_num0
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 2513034
num_examples: 15225
- name: test
num_bytes: 330087
num_examples: 1173
download_size: 2892291
dataset_size: 2843121
- config_name: filler_num1
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 2802865
num_examples: 16290
- name: test
num_bytes: 330087
num_examples: 1173
download_size: 3185317
dataset_size: 3132952
- config_name: filler_num2
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 3106220
num_examples: 17391
- name: test
num_bytes: 330087
num_examples: 1173
download_size: 3491975
dataset_size: 3436307
- config_name: filler_num3
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 3412704
num_examples: 18528
- name: test
num_bytes: 330087
num_examples: 1173
download_size: 3801870
dataset_size: 3742791
- config_name: length
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 2672464
num_examples: 16990
- name: test
num_bytes: 1345218
num_examples: 3920
download_size: 4080388
dataset_size: 4017682
- config_name: template_around_right
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 2513034
num_examples: 15225
- name: test
num_bytes: 1229757
num_examples: 4476
download_size: 3801870
dataset_size: 3742791
- config_name: template_jump_around_right
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 3412704
num_examples: 18528
- name: test
num_bytes: 330087
num_examples: 1173
download_size: 3801870
dataset_size: 3742791
- config_name: template_opposite_right
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 2944398
num_examples: 15225
- name: test
num_bytes: 857943
num_examples: 4476
download_size: 3861420
dataset_size: 3802341
- config_name: template_right
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 3127623
num_examples: 15225
- name: test
num_bytes: 716403
num_examples: 4476
download_size: 3903105
dataset_size: 3844026
config_names:
- addprim_jump
- addprim_turn_left
- filler_num0
- filler_num1
- filler_num2
- filler_num3
- length
- simple
- template_around_right
- template_jump_around_right
- template_opposite_right
- template_right
---
# Dataset Card for "scan"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/brendenlake/SCAN](https://github.com/brendenlake/SCAN)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 224.18 MB
- **Size of the generated dataset:** 44.53 MB
- **Total amount of disk used:** 268.71 MB
### Dataset Summary
SCAN tasks with various splits.
SCAN is a set of simple language-driven navigation tasks for studying
compositional learning and zero-shot generalization.
See https://github.com/brendenlake/SCAN for a description of the splits.
Example usage:
data = datasets.load_dataset('scan', 'length')
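Each example pairs a natural-language command with its target action sequence. The raw SCAN data files (per the upstream repository) store one example per line as `IN: <commands> OUT: <actions>`; a minimal parsing sketch, using an illustrative sample line:

```python
def parse_scan_line(line: str) -> dict:
    """Split a raw SCAN line of the form 'IN: <commands> OUT: <actions>'
    into the commands/actions fields used by this dataset."""
    left, _, right = line.partition(" OUT: ")
    return {
        "commands": left.removeprefix("IN:").strip(),
        "actions": right.strip(),
    }

# Illustrative line; SCAN maps e.g. "jump twice" to the action token I_JUMP repeated.
print(parse_scan_line("IN: jump twice OUT: I_JUMP I_JUMP"))
# {'commands': 'jump twice', 'actions': 'I_JUMP I_JUMP'}
```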
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### addprim_jump
- **Size of downloaded dataset files:** 18.69 MB
- **Size of the generated dataset:** 4.05 MB
- **Total amount of disk used:** 22.73 MB
An example of 'train' looks as follows.
```
```
#### addprim_turn_left
- **Size of downloaded dataset files:** 18.69 MB
- **Size of the generated dataset:** 4.09 MB
- **Total amount of disk used:** 22.76 MB
An example of 'train' looks as follows.
```
```
#### filler_num0
- **Size of downloaded dataset files:** 18.69 MB
- **Size of the generated dataset:** 2.85 MB
- **Total amount of disk used:** 21.53 MB
An example of 'train' looks as follows.
```
```
#### filler_num1
- **Size of downloaded dataset files:** 18.69 MB
- **Size of the generated dataset:** 3.14 MB
- **Total amount of disk used:** 21.82 MB
An example of 'train' looks as follows.
```
```
#### filler_num2
- **Size of downloaded dataset files:** 18.69 MB
- **Size of the generated dataset:** 3.44 MB
- **Total amount of disk used:** 22.12 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### addprim_jump
- `commands`: a `string` feature.
- `actions`: a `string` feature.
#### addprim_turn_left
- `commands`: a `string` feature.
- `actions`: a `string` feature.
#### filler_num0
- `commands`: a `string` feature.
- `actions`: a `string` feature.
#### filler_num1
- `commands`: a `string` feature.
- `actions`: a `string` feature.
#### filler_num2
- `commands`: a `string` feature.
- `actions`: a `string` feature.
### Data Splits
| name |train|test|
|-----------------|----:|---:|
|addprim_jump |14670|7706|
|addprim_turn_left|21890|1208|
|filler_num0 |15225|1173|
|filler_num1 |16290|1173|
|filler_num2 |17391|1173|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{Lake2018GeneralizationWS,
title={Generalization without Systematicity: On the Compositional Skills of
Sequence-to-Sequence Recurrent Networks},
author={Brenden M. Lake and Marco Baroni},
booktitle={ICML},
year={2018},
url={https://arxiv.org/pdf/1711.00350.pdf},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
mteb/sts16-sts | 2022-09-27T19:12:09.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 1 | 1,857 | ---
language:
- en
--- |
m3hrdadfi/recipe_nlg_lite | 2021-07-03T09:34:56.000Z | [
"region:us"
] | m3hrdadfi | RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation - Lite version
The dataset we publish contains 7,198 cooking recipes (>7K).
It's processed in a more careful way and provides more samples than any other dataset in the area. | @misc{RecipeNLGLite,
author = {Mehrdad Farahani},
title = {RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation (Lite)},
year = 2021,
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/recipe-nlg-lite}},
} | null | 2 | 1,850 | # RecipeNLG: A Cooking Recipes Dataset
RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation - Lite version
The dataset contains `7,198` cooking recipes (`>7K`).
It's processed in a more careful way and provides more samples than any other dataset in the area.
## How to use
```bash
pip install git+https://github.com/huggingface/datasets.git
```
Load `m3hrdadfi/recipe_nlg_lite` dataset using `load_dataset`:
```python
from datasets import load_dataset
dataset = load_dataset("m3hrdadfi/recipe_nlg_lite")
print(dataset)
```
Output:
```text
DatasetDict({
train: Dataset({
features: ['uid', 'name', 'description', 'link', 'ner', 'ingredients', 'steps'],
num_rows: 6118
})
test: Dataset({
features: ['uid', 'name', 'description', 'link', 'ner', 'ingredients', 'steps'],
num_rows: 1080
})
})
```
## Examples
```json
{
"description": "we all know how satisfying it is to make great pork tenderloin, ribs, or a roast but the end of the meal creates a new quandary what do you do with the leftover pork contrary to what you might think, it's not that difficult . how to repurpose your meal is where real cooking creativity comes into play, so let us present to you our favorite pork chop soup recipe . with this recipe, you'll discover how the natural bold flavor of pork gives this hearty soup a lift that a vegetable soup or chicken noodle soup just can't get . it's a dinner recipe to warm you up on a cold winter night or a midday restorative for a long work week . throw all the ingredients in a large pot and let it simmer on the stove for a couple hours, or turn it into a slow cooker recipe and let it percolate for an afternoon . this foolproof recipe transforms your favorite comfort food into an easy meal to warm you up again and again . the health benefits of pork pork is a great option if you're on a low carb diet or trying to up your protein intake . the protein percentage of leaner cuts of pork can be as high as 89 percent pork also provides valuable vitamins and minerals that make pork recipes worthy endeavors . pork has high levels of thiamin and niacin, which other types of meat like beef and lamb lack . they are both b vitamins that aid in several body functions such as metabolism and cell function . pork also delivers a healthy amount of zinc, which aids in brain and immune system function . that makes digging into this pork chop noodle soup all the more alluring . recipe variations this pork soup recipe can be adapted to many diets . if you're following a low carb or ketogenic diet, you can modify the recipe to suit you by leaving out the noodles . if you like, you can add a little crunch by topping it with french fried onions . for cheese lovers, a sprinkle of parmesan cheese can give the soup more body and extra umami flavors . 
if you're not a noodle lover, this soup recipe works equally well as a potato soup with diced potatoes . if you want to make a southwestern or mexican version, add a can of diced tomatoes and bell peppers for a little extra depth . if you have a penchant for spicy soups, add a little chili powder or red pepper flakes . it's up to you this recipe is great for using up leftover pork chops, but you can make this soup using fresh chops however you decide to do it, you won't be disappointed.",
"ingredients": "3.0 bone in pork chops, salt, pepper, 2.0 tablespoon vegetable oil, 2.0 cup chicken broth, 4.0 cup vegetable broth, 1.0 red onion, 4.0 carrots, 2.0 clove garlic, 1.0 teaspoon dried thyme, 0.5 teaspoon dried basil, 1.0 cup rotini pasta, 2.0 stalk celery",
"link": "https://www.yummly.com/private/recipe/Pork-Chop-Noodle-Soup-2249011?layout=prep-steps",
"name": "pork chop noodle soup",
"ner": "bone in pork chops, salt, pepper, vegetable oil, chicken broth, vegetable broth, red onion, carrots, garlic, dried thyme, dried basil, rotini pasta, celery",
"steps": "season pork chops with salt and pepper . heat oil in a dutch oven over medium high heat . add chops and cook for about 4 minutes, until golden brown . flip and cook 4 minutes more, until golden brown . transfer chops to a plate and set aside . pour half of chicken broth into pot, scraping all browned bits from bottom . add remaining chicken broth, vegetable broth, onion, carrots, celery and garlic . mix well and bring to a simmer . add 1 quart water, thyme, basil, 2 teaspoons salt and 1 teaspoon pepper . mix well and bring to a simmer . add chops back to pot and return to simmer . reduce heat and simmer for 90 minutes, stirring occasionally, being careful not to break up chops . transfer chops to plate, trying not to break them up . set aside to cool . raise the heat and bring the soup to a boil . add pasta and cook for about 12 minutes, until tender . when the chops are cool, pull them apart, discarding all the bones and fat . add the meat back to soup and stir well . taste for salt and pepper, and add if needed, before serving.",
"uid": "dab8b7d0-e0f6-4bb0-aed9-346e80dace1f"
}
```
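The `ner` and `ingredients` fields in the record above are flat comma-separated strings; a small helper sketch (hypothetical, not part of the dataset loader) for turning them back into lists:

```python
def split_field(value: str) -> list:
    """Split a comma-separated recipe field into a clean list of items."""
    return [item.strip() for item in value.split(",") if item.strip()]

ner = "bone in pork chops, salt, pepper, vegetable oil, chicken broth"
print(split_field(ner))
# ['bone in pork chops', 'salt', 'pepper', 'vegetable oil', 'chicken broth']
```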
## Citation
```bibtex
@misc{RecipeNLGLite,
author = {Mehrdad Farahani},
title = {RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation (Lite)},
year = 2021,
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/recipe-nlg-lite}},
}
```
|
pie/conll2003 | 2022-05-06T16:14:31.000Z | [
"region:us"
] | pie | null | null | null | 0 | 1,846 | Entry not found |
iwslt2017 | 2023-04-05T10:07:51.000Z | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ar",
"language:de",
"language:en",
"language:fr",
"language:it",
"language:ja",
"language:ko",
"language:nl",
"language:ro",
"language:zh",
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | The IWSLT 2017 Multilingual Task addresses text translation, including zero-shot translation, with a single MT system across all directions including English, German, Dutch, Italian and Romanian. As an unofficial task, conventional bilingual text translation is offered between English and Arabic, French, Japanese, Chinese, German and Korean. | @inproceedings{cettolo-etal-2017-overview,
title = "Overview of the {IWSLT} 2017 Evaluation Campaign",
author = {Cettolo, Mauro and
Federico, Marcello and
Bentivogli, Luisa and
Niehues, Jan and
St{\"u}ker, Sebastian and
Sudoh, Katsuhito and
Yoshino, Koichiro and
Federmann, Christian},
booktitle = "Proceedings of the 14th International Conference on Spoken Language Translation",
month = dec # " 14-15",
year = "2017",
address = "Tokyo, Japan",
publisher = "International Workshop on Spoken Language Translation",
url = "https://aclanthology.org/2017.iwslt-1.1",
pages = "2--14",
} | null | 13 | 1,833 | ---
annotations_creators:
- crowdsourced
language:
- ar
- de
- en
- fr
- it
- ja
- ko
- nl
- ro
- zh
language_creators:
- expert-generated
license:
- cc-by-nc-nd-4.0
multilinguality:
- translation
pretty_name: IWSLT 2017
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: iwslt-2017
dataset_info:
- config_name: iwslt2017-en-it
features:
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 46647925
num_examples: 231619
- name: test
num_bytes: 305246
num_examples: 1566
- name: validation
num_bytes: 200023
num_examples: 929
download_size: 329391132
dataset_size: 47153194
- config_name: iwslt2017-en-nl
features:
- name: translation
dtype:
translation:
languages:
- en
- nl
splits:
- name: train
num_bytes: 42843933
num_examples: 237240
- name: test
num_bytes: 311646
num_examples: 1777
- name: validation
num_bytes: 197814
num_examples: 1003
download_size: 329391132
dataset_size: 43353393
- config_name: iwslt2017-en-ro
features:
- name: translation
dtype:
translation:
languages:
- en
- ro
splits:
- name: train
num_bytes: 44129950
num_examples: 220538
- name: test
num_bytes: 316790
num_examples: 1678
- name: validation
num_bytes: 205028
num_examples: 914
download_size: 329391132
dataset_size: 44651768
- config_name: iwslt2017-it-en
features:
- name: translation
dtype:
translation:
languages:
- it
- en
splits:
- name: train
num_bytes: 46647925
num_examples: 231619
- name: test
num_bytes: 305246
num_examples: 1566
- name: validation
num_bytes: 200023
num_examples: 929
download_size: 329391132
dataset_size: 47153194
- config_name: iwslt2017-it-nl
features:
- name: translation
dtype:
translation:
languages:
- it
- nl
splits:
- name: train
num_bytes: 43033168
num_examples: 233415
- name: test
num_bytes: 309725
num_examples: 1669
- name: validation
num_bytes: 197774
num_examples: 1001
download_size: 329391132
dataset_size: 43540667
- config_name: iwslt2017-it-ro
features:
- name: translation
dtype:
translation:
languages:
- it
- ro
splits:
- name: train
num_bytes: 44485169
num_examples: 217551
- name: test
num_bytes: 314974
num_examples: 1643
- name: validation
num_bytes: 204989
num_examples: 914
download_size: 329391132
dataset_size: 45005132
- config_name: iwslt2017-nl-en
features:
- name: translation
dtype:
translation:
languages:
- nl
- en
splits:
- name: train
num_bytes: 42843933
num_examples: 237240
- name: test
num_bytes: 311646
num_examples: 1777
- name: validation
num_bytes: 197814
num_examples: 1003
download_size: 329391132
dataset_size: 43353393
- config_name: iwslt2017-nl-it
features:
- name: translation
dtype:
translation:
languages:
- nl
- it
splits:
- name: train
num_bytes: 43033168
num_examples: 233415
- name: test
num_bytes: 309725
num_examples: 1669
- name: validation
num_bytes: 197774
num_examples: 1001
download_size: 329391132
dataset_size: 43540667
- config_name: iwslt2017-nl-ro
features:
- name: translation
dtype:
translation:
languages:
- nl
- ro
splits:
- name: train
num_bytes: 41338738
num_examples: 206920
- name: test
num_bytes: 320952
num_examples: 1680
- name: validation
num_bytes: 202380
num_examples: 913
download_size: 329391132
dataset_size: 41862070
- config_name: iwslt2017-ro-en
features:
- name: translation
dtype:
translation:
languages:
- ro
- en
splits:
- name: train
num_bytes: 44129950
num_examples: 220538
- name: test
num_bytes: 316790
num_examples: 1678
- name: validation
num_bytes: 205028
num_examples: 914
download_size: 329391132
dataset_size: 44651768
- config_name: iwslt2017-ro-it
features:
- name: translation
dtype:
translation:
languages:
- ro
- it
splits:
- name: train
num_bytes: 44485169
num_examples: 217551
- name: test
num_bytes: 314974
num_examples: 1643
- name: validation
num_bytes: 204989
num_examples: 914
download_size: 329391132
dataset_size: 45005132
- config_name: iwslt2017-ro-nl
features:
- name: translation
dtype:
translation:
languages:
- ro
- nl
splits:
- name: train
num_bytes: 41338738
num_examples: 206920
- name: test
num_bytes: 320952
num_examples: 1680
- name: validation
num_bytes: 202380
num_examples: 913
download_size: 329391132
dataset_size: 41862070
- config_name: iwslt2017-ar-en
features:
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 56481059
num_examples: 231713
- name: test
num_bytes: 2014296
num_examples: 8583
- name: validation
num_bytes: 241206
num_examples: 888
download_size: 27748780
dataset_size: 58736561
- config_name: iwslt2017-de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 42608380
num_examples: 206112
- name: test
num_bytes: 1608474
num_examples: 8079
- name: validation
num_bytes: 210975
num_examples: 888
download_size: 16758320
dataset_size: 44427829
- config_name: iwslt2017-en-ar
features:
- name: translation
dtype:
translation:
languages:
- en
- ar
splits:
- name: train
num_bytes: 56481059
num_examples: 231713
- name: test
num_bytes: 2014296
num_examples: 8583
- name: validation
num_bytes: 241206
num_examples: 888
download_size: 29333173
dataset_size: 58736561
- config_name: iwslt2017-en-de
features:
- name: translation
dtype:
translation:
languages:
- en
- de
splits:
- name: train
num_bytes: 42608380
num_examples: 206112
- name: test
num_bytes: 1608474
num_examples: 8079
- name: validation
num_bytes: 210975
num_examples: 888
download_size: 16758334
dataset_size: 44427829
- config_name: iwslt2017-en-fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 49273286
num_examples: 232825
- name: test
num_bytes: 1767465
num_examples: 8597
- name: validation
num_bytes: 207579
num_examples: 890
download_size: 27699724
dataset_size: 51248330
- config_name: iwslt2017-en-ja
features:
- name: translation
dtype:
translation:
languages:
- en
- ja
splits:
- name: train
num_bytes: 48204987
num_examples: 223108
- name: test
num_bytes: 1809007
num_examples: 8469
- name: validation
num_bytes: 208124
num_examples: 871
download_size: 26983602
dataset_size: 50222118
- config_name: iwslt2017-en-ko
features:
- name: translation
dtype:
translation:
languages:
- en
- ko
splits:
- name: train
num_bytes: 51678043
num_examples: 230240
- name: test
num_bytes: 1869793
num_examples: 8514
- name: validation
num_bytes: 219295
num_examples: 879
download_size: 19364776
dataset_size: 53767131
- config_name: iwslt2017-en-zh
features:
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: train
num_bytes: 44271004
num_examples: 231266
- name: test
num_bytes: 1605527
num_examples: 8549
- name: validation
num_bytes: 202537
num_examples: 879
download_size: 27597071
dataset_size: 46079068
- config_name: iwslt2017-fr-en
features:
- name: translation
dtype:
translation:
languages:
- fr
- en
splits:
- name: train
num_bytes: 49273286
num_examples: 232825
- name: test
num_bytes: 1767465
num_examples: 8597
- name: validation
num_bytes: 207579
num_examples: 890
download_size: 26880731
dataset_size: 51248330
- config_name: iwslt2017-ja-en
features:
- name: translation
dtype:
translation:
languages:
- ja
- en
splits:
- name: train
num_bytes: 48204987
num_examples: 223108
- name: test
num_bytes: 1809007
num_examples: 8469
- name: validation
num_bytes: 208124
num_examples: 871
download_size: 26190859
dataset_size: 50222118
- config_name: iwslt2017-ko-en
features:
- name: translation
dtype:
translation:
languages:
- ko
- en
splits:
- name: train
num_bytes: 51678043
num_examples: 230240
- name: test
num_bytes: 1869793
num_examples: 8514
- name: validation
num_bytes: 219295
num_examples: 879
download_size: 19364733
dataset_size: 53767131
- config_name: iwslt2017-zh-en
features:
- name: translation
dtype:
translation:
languages:
- zh
- en
splits:
- name: train
num_bytes: 44271004
num_examples: 231266
- name: test
num_bytes: 1605527
num_examples: 8549
- name: validation
num_bytes: 202537
num_examples: 879
download_size: 26849290
dataset_size: 46079068
---
# Dataset Card for IWSLT 2017
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://sites.google.com/site/iwsltevaluation2017/TED-tasks](https://sites.google.com/site/iwsltevaluation2017/TED-tasks)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Overview of the IWSLT 2017 Evaluation Campaign](https://aclanthology.org/2017.iwslt-1.1/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.24 GB
- **Size of the generated dataset:** 1.14 GB
- **Total amount of disk used:** 5.38 GB
### Dataset Summary
The IWSLT 2017 Multilingual Task addresses text translation, including zero-shot translation, with a single MT system
across all directions including English, German, Dutch, Italian and Romanian. As an unofficial task, conventional
bilingual text translation is offered between English and Arabic, French, Japanese, Chinese, German and Korean.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### iwslt2017-ar-en
- **Size of downloaded dataset files:** 27.75 MB
- **Size of the generated dataset:** 58.74 MB
- **Total amount of disk used:** 86.49 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"translation": "{\"ar\": \"لقد طرت في \\\"القوات الجوية \\\" لمدة ثمان سنوات. والآن أجد نفسي مضطرا لخلع حذائي قبل صعود الطائرة!\", \"en\": \"I flew on Air ..."
}
```
#### iwslt2017-de-en
- **Size of downloaded dataset files:** 16.76 MB
- **Size of the generated dataset:** 44.43 MB
- **Total amount of disk used:** 61.18 MB
An example of 'train' looks as follows.
```
{
"translation": {
"de": "Es ist mir wirklich eine Ehre, zweimal auf dieser Bühne stehen zu dürfen. Tausend Dank dafür.",
"en": "And it's truly a great honor to have the opportunity to come to this stage twice; I'm extremely grateful."
}
}
```
#### iwslt2017-en-ar
- **Size of downloaded dataset files:** 29.33 MB
- **Size of the generated dataset:** 58.74 MB
- **Total amount of disk used:** 88.07 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"translation": "{\"ar\": \"لقد طرت في \\\"القوات الجوية \\\" لمدة ثمان سنوات. والآن أجد نفسي مضطرا لخلع حذائي قبل صعود الطائرة!\", \"en\": \"I flew on Air ..."
}
```
#### iwslt2017-en-de
- **Size of downloaded dataset files:** 16.76 MB
- **Size of the generated dataset:** 44.43 MB
- **Total amount of disk used:** 61.18 MB
An example of 'validation' looks as follows.
```
{
"translation": {
"de": "Die nächste Folie, die ich Ihnen zeige, ist eine Zeitrafferaufnahme was in den letzten 25 Jahren passiert ist.",
"en": "The next slide I show you will be a rapid fast-forward of what's happened over the last 25 years."
}
}
```
#### iwslt2017-en-fr
- **Size of downloaded dataset files:** 27.69 MB
- **Size of the generated dataset:** 51.24 MB
- **Total amount of disk used:** 78.94 MB
An example of 'validation' looks as follows.
```
{
"translation": {
"en": "But this understates the seriousness of this particular problem because it doesn't show the thickness of the ice.",
"fr": "Mais ceci tend à amoindrir le problème parce qu'on ne voit pas l'épaisseur de la glace."
}
}
```
### Data Fields
The data fields are the same among all splits.
#### iwslt2017-ar-en
- `translation`: a multilingual `string` variable, with possible languages including `ar`, `en`.
#### iwslt2017-de-en
- `translation`: a multilingual `string` variable, with possible languages including `de`, `en`.
#### iwslt2017-en-ar
- `translation`: a multilingual `string` variable, with possible languages including `en`, `ar`.
#### iwslt2017-en-de
- `translation`: a multilingual `string` variable, with possible languages including `en`, `de`.
#### iwslt2017-en-fr
- `translation`: a multilingual `string` variable, with possible languages including `en`, `fr`.
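Concretely, each row carries a single `translation` dict mapping language codes to strings. The sketch below (plain Python, no download needed; the sample row is copied from the `iwslt2017-de-en` example above, and `to_pair` is an illustrative helper, not part of the dataset API) shows how such a row splits into a source/target pair for translation training:

```python
# Sample row in the same shape as an iwslt2017-de-en train example.
example = {
    "translation": {
        "de": "Es ist mir wirklich eine Ehre, zweimal auf dieser Bühne stehen zu dürfen. Tausend Dank dafür.",
        "en": "And it's truly a great honor to have the opportunity to come to this stage twice; I'm extremely grateful.",
    }
}

def to_pair(row, src, tgt):
    """Split one row's translation dict into (source, target) strings."""
    t = row["translation"]
    return t[src], t[tgt]

src_text, tgt_text = to_pair(example, "de", "en")
print(src_text)
print(tgt_text)
```

The same access pattern works for every bilingual config, with only the two language codes changing.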
### Data Splits
| name |train |validation|test|
|---------------|-----:|---------:|---:|
|iwslt2017-ar-en|231713| 888|8583|
|iwslt2017-de-en|206112| 888|8079|
|iwslt2017-en-ar|231713| 888|8583|
|iwslt2017-en-de|206112| 888|8079|
|iwslt2017-en-fr|232825| 890|8597|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Creative Commons BY-NC-ND
See the [TED Talks Usage Policy](https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy).
### Citation Information
```
@inproceedings{cettolo-etal-2017-overview,
title = "Overview of the {IWSLT} 2017 Evaluation Campaign",
author = {Cettolo, Mauro and
Federico, Marcello and
Bentivogli, Luisa and
Niehues, Jan and
St{\"u}ker, Sebastian and
Sudoh, Katsuhito and
Yoshino, Koichiro and
Federmann, Christian},
booktitle = "Proceedings of the 14th International Conference on Spoken Language Translation",
month = dec # " 14-15",
year = "2017",
address = "Tokyo, Japan",
publisher = "International Workshop on Spoken Language Translation",
url = "https://aclanthology.org/2017.iwslt-1.1",
pages = "2--14",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@Narsil](https://github.com/Narsil) for adding this dataset. |
huggingface-course/codeparrot-ds-valid | 2021-09-13T14:24:27.000Z | [
"region:us"
] | huggingface-course | null | null | null | 2 | 1,829 | Entry not found |
allenai/scirepeval | 2023-08-25T20:52:45.000Z | [
"region:us"
] | allenai | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2021}
} | null | 9 | 1,823 | ---
dataset_info:
- config_name: fos
features:
- name: doc_id
dtype: string
- name: corpus_id
dtype: uint64
- name: title
dtype: string
- name: abstract
dtype: string
- name: labels
sequence: int32
- name: labels_text
sequence: string
splits:
- name: evaluation
num_bytes: 63854253
num_examples: 68147
- name: train
num_bytes: 509154623
num_examples: 541218
- name: validation
num_bytes: 63947785
num_examples: 67631
download_size: 683428084
dataset_size: 636956661
- config_name: mesh_descriptors
features:
- name: doc_id
dtype: string
- name: mag_id
dtype: uint64
- name: corpus_id
dtype: uint64
- name: title
dtype: string
- name: abstract
dtype: string
- name: descriptor
dtype: string
- name: qualifier
dtype: string
splits:
- name: evaluation
num_bytes: 390178523
num_examples: 258678
- name: train
num_bytes: 3120117992
num_examples: 2069065
- name: validation
num_bytes: 390161743
num_examples: 258678
download_size: 4132614464
dataset_size: 3900458258
- config_name: cite_count
features:
- name: doc_id
dtype: string
- name: corpus_id
dtype: uint64
- name: title
dtype: string
- name: abstract
dtype: string
- name: venue
dtype: string
- name: n_citations
dtype: int32
- name: log_citations
dtype: float32
splits:
- name: evaluation
num_bytes: 45741032
num_examples: 30058
- name: train
num_bytes: 265390284
num_examples: 175944
- name: validation
num_bytes: 40997159
num_examples: 26830
download_size: 378454118
dataset_size: 352128475
- config_name: pub_year
features:
- name: doc_id
dtype: string
- name: corpus_id
dtype: uint64
- name: title
dtype: string
- name: abstract
dtype: string
- name: year
dtype: int32
- name: venue
dtype: string
- name: norm_year
dtype: float32
- name: scaled_year
dtype: float32
- name: n_authors
dtype: int32
- name: norm_authors
dtype: float32
splits:
- name: evaluation
num_bytes: 46195045
num_examples: 30000
- name: train
num_bytes: 301313882
num_examples: 198995
- name: validation
num_bytes: 30493617
num_examples: 19869
download_size: 411086891
dataset_size: 378002544
- config_name: cite_prediction
features:
- name: query
struct:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: sha
dtype: string
- name: corpus_id
dtype: uint64
- name: pos
struct:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: sha
dtype: string
- name: corpus_id
dtype: uint64
- name: neg
struct:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: sha
dtype: string
- name: corpus_id
dtype: uint64
splits:
- name: train
num_bytes: 2582594392
num_examples: 676150
- name: validation
num_bytes: 549599739
num_examples: 143686
download_size: 3287219740
dataset_size: 3132194131
- config_name: cite_prediction_new
features:
- name: query
struct:
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
- name: pos
struct:
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
- name: neg
struct:
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
- name: score
dtype: int8
splits:
- name: train
num_bytes: 23829782726
num_examples: 6197963
- name: validation
num_bytes: 609822308
num_examples: 176430
download_size: 25842249246
dataset_size: 24439605034
- config_name: cite_prediction_aug2023refresh
features:
- name: query
struct:
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
- name: pos
struct:
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
- name: neg
struct:
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
splits:
- name: train
num_bytes: 2069439948
num_examples: 475656
download_size: 2147428459
dataset_size: 2069439948
- config_name: high_influence_cite
features:
- name: query
struct:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
- name: candidates
list:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
- name: score
dtype: uint32
splits:
- name: evaluation
num_bytes: 85746699
num_examples: 1199
- name: train
num_bytes: 2607643584
num_examples: 58626
- name: validation
num_bytes: 329589399
num_examples: 7356
download_size: 3149789722
dataset_size: 3022979682
- config_name: same_author
features:
- name: dataset
dtype: string
- name: query
struct:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
- name: candidates
list:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
- name: score
dtype: uint32
splits:
- name: evaluation
num_bytes: 126843751
num_examples: 13585
- name: train
num_bytes: 602167355
num_examples: 67493
- name: validation
num_bytes: 84426970
num_examples: 8996
download_size: 866210529
dataset_size: 813438076
- config_name: search
features:
- name: query
dtype: string
- name: doc_id
dtype: string
- name: candidates
list:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
- name: venue
dtype: string
- name: year
dtype: float64
- name: author_names
sequence: string
- name: n_citations
dtype: int32
- name: n_key_citations
dtype: int32
- name: score
dtype: uint32
splits:
- name: evaluation
num_bytes: 39417912
num_examples: 2637
- name: train
num_bytes: 6889691036
num_examples: 399878
- name: validation
num_bytes: 1096150259
num_examples: 67363
download_size: 9645282078
dataset_size: 8025259207
- config_name: biomimicry
features:
- name: doc_id
dtype: string
- name: doi
dtype: string
- name: corpus_id
dtype: uint64
- name: title
dtype: string
- name: abstract
dtype: string
- name: label
dtype: uint32
- name: venue
dtype: string
splits:
- name: evaluation
num_bytes: 16651415
num_examples: 10991
download_size: 17437012
dataset_size: 16651415
- config_name: drsm
features:
- name: doc_id
dtype: string
- name: corpus_id
dtype: uint64
- name: title
dtype: string
- name: abstract
dtype: string
- name: label_type
dtype: string
- name: label
dtype: string
- name: class
dtype: uint32
splits:
- name: evaluation
num_bytes: 12756487
num_examples: 8813
download_size: 13449713
dataset_size: 12756487
- config_name: feeds_1
features:
- name: query
struct:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
- name: feed_id
dtype: string
- name: candidates
list:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
- name: score
dtype: uint32
splits:
- name: evaluation
num_bytes: 6488182
num_examples: 423
download_size: 6911928
dataset_size: 6488182
- config_name: feeds_m
features:
- name: query
struct:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
- name: feed_id
dtype: string
- name: candidates
list:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
- name: score
dtype: uint32
splits:
- name: evaluation
num_bytes: 135219457
num_examples: 9025
download_size: 149126628
dataset_size: 135219457
- config_name: feeds_title
features:
- name: query
dtype: string
- name: doc_id
dtype: string
- name: feed_id
dtype: string
- name: abbreviations
dtype: string
- name: candidates
list:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
- name: score
dtype: uint32
splits:
- name: evaluation
num_bytes: 5923757
num_examples: 424
download_size: 6228046
dataset_size: 5923757
- config_name: peer_review_score_hIndex
features:
- name: doc_id
dtype: string
- name: corpus_id
dtype: uint64
- name: title
dtype: string
- name: abstract
dtype: string
- name: rating
sequence: int32
- name: confidence
dtype: string
- name: authors
sequence: string
- name: decision
dtype: string
- name: mean_rating
dtype: float32
- name: hIndex
sequence: string
splits:
- name: evaluation
num_bytes: 18233728
num_examples: 12668
download_size: 19647506
dataset_size: 18233728
- config_name: trec_covid
features:
- name: query
dtype: string
- name: doc_id
dtype: string
- name: candidates
list:
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: string
- name: doc_id
dtype: string
- name: date
dtype: string
- name: doi
dtype: string
- name: iteration
dtype: string
- name: score
dtype: int32
splits:
- name: evaluation
num_bytes: 98757931
num_examples: 50
download_size: 104449690
dataset_size: 98757931
- config_name: tweet_mentions
features:
- name: doc_id
dtype: string
- name: corpus_id
dtype: uint64
- name: title
dtype: string
- name: abstract
dtype: string
- name: index
dtype: int32
- name: retweets
dtype: float32
- name: count
dtype: int32
- name: mentions
dtype: float32
splits:
- name: evaluation
num_bytes: 25895172
num_examples: 25655
download_size: 28533162
dataset_size: 25895172
- config_name: scidocs_mag_mesh
features:
- name: doc_id
dtype: string
- name: corpus_id
dtype: uint64
- name: title
dtype: string
- name: abstract
dtype: string
- name: authors
sequence: string
- name: cited_by
sequence: string
- name: references
sequence: string
- name: year
dtype: int32
splits:
- name: evaluation
num_bytes: 74027498
num_examples: 48473
download_size: 109426986
dataset_size: 74027498
- config_name: scidocs_view_cite_read
features:
- name: doc_id
dtype: string
- name: corpus_id
dtype: uint64
- name: title
dtype: string
- name: abstract
dtype: string
- name: authors
sequence: string
- name: cited_by
sequence: string
- name: references
sequence: string
- name: year
dtype: int32
splits:
- name: evaluation
num_bytes: 240557104
num_examples: 142009
download_size: 949184683
dataset_size: 240557104
- config_name: paper_reviewer_matching
features:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: corpus_id
dtype: uint64
splits:
- name: evaluation
num_bytes: 76005931
num_examples: 73364
download_size: 88124286
dataset_size: 76005931
---
|
nielsr/ade20k-panoptic-demo | 2022-11-06T17:13:22.000Z | [
"region:us"
] | nielsr | null | null | null | 0 | 1,801 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
- name: segments_info
list:
- name: area
dtype: int64
- name: bbox
sequence: int64
- name: category_id
dtype: int64
- name: id
dtype: int64
- name: iscrowd
dtype: int64
splits:
- name: train
num_bytes: 492746.0
num_examples: 10
- name: validation
num_bytes: 461402.0
num_examples: 10
download_size: 949392
dataset_size: 954148.0
---
# Dataset Card for "ade20k-panoptic-demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceM4/TextCaps | 2022-12-09T01:38:32.000Z | [
"license:cc-by-4.0",
"region:us"
] | HuggingFaceM4 | TextCaps requires models to read and reason about text in images to generate captions about them. Specifically, models need to incorporate a new modality of text present in the images and reason over it and visual content in the image to generate image descriptions.
Current state-of-the-art models fail to generate captions for images in TextCaps because they do not have text reading and reasoning capabilities. See the examples in the image to compare ground truth answers and corresponding predictions by a state-of-the-art model. | @article{sidorov2019textcaps,
title={TextCaps: a Dataset for Image Captioningwith Reading Comprehension},
author={Sidorov, Oleksii and Hu, Ronghang and Rohrbach, Marcus and Singh, Amanpreet},
journal={arXiv preprint arXiv:2003.12462},
year={2020}
} | null | 0 | 1,797 | ---
license: cc-by-4.0
---
|
sem_eval_2018_task_1 | 2022-11-18T21:45:06.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ar",
"language:en",
"language:es",
"license:unknown",
"emotion-classification",
"region:us"
] | null | SemEval-2018 Task 1: Affect in Tweets: SubTask 5: Emotion Classification.
This is a dataset for multilabel emotion classification for tweets.
'Given a tweet, classify it as 'neutral or no emotion' or as one, or more, of eleven given emotions that best represent the mental state of the tweeter.'
It contains 22467 tweets in three languages manually annotated by crowdworkers using Best–Worst Scaling. | @InProceedings{SemEval2018Task1,
author = {Mohammad, Saif M. and Bravo-Marquez, Felipe and Salameh, Mohammad and Kiritchenko, Svetlana},
title = {SemEval-2018 {T}ask 1: {A}ffect in Tweets},
booktitle = {Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)},
address = {New Orleans, LA, USA},
year = {2018}} | null | 9 | 1,796 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ar
- en
- es
license:
- unknown
multilinguality:
- multilingual
pretty_name: 'SemEval-2018 Task 1: Affect in Tweets'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
tags:
- emotion-classification
dataset_info:
- config_name: subtask5.english
features:
- name: ID
dtype: string
- name: Tweet
dtype: string
- name: anger
dtype: bool
- name: anticipation
dtype: bool
- name: disgust
dtype: bool
- name: fear
dtype: bool
- name: joy
dtype: bool
- name: love
dtype: bool
- name: optimism
dtype: bool
- name: pessimism
dtype: bool
- name: sadness
dtype: bool
- name: surprise
dtype: bool
- name: trust
dtype: bool
splits:
- name: train
num_bytes: 809768
num_examples: 6838
- name: test
num_bytes: 384519
num_examples: 3259
- name: validation
num_bytes: 104660
num_examples: 886
download_size: 5975590
dataset_size: 1298947
- config_name: subtask5.spanish
features:
- name: ID
dtype: string
- name: Tweet
dtype: string
- name: anger
dtype: bool
- name: anticipation
dtype: bool
- name: disgust
dtype: bool
- name: fear
dtype: bool
- name: joy
dtype: bool
- name: love
dtype: bool
- name: optimism
dtype: bool
- name: pessimism
dtype: bool
- name: sadness
dtype: bool
- name: surprise
dtype: bool
- name: trust
dtype: bool
splits:
- name: train
num_bytes: 362549
num_examples: 3561
- name: test
num_bytes: 288692
num_examples: 2854
- name: validation
num_bytes: 67259
num_examples: 679
download_size: 5975590
dataset_size: 718500
- config_name: subtask5.arabic
features:
- name: ID
dtype: string
- name: Tweet
dtype: string
- name: anger
dtype: bool
- name: anticipation
dtype: bool
- name: disgust
dtype: bool
- name: fear
dtype: bool
- name: joy
dtype: bool
- name: love
dtype: bool
- name: optimism
dtype: bool
- name: pessimism
dtype: bool
- name: sadness
dtype: bool
- name: surprise
dtype: bool
- name: trust
dtype: bool
splits:
- name: train
num_bytes: 414458
num_examples: 2278
- name: test
num_bytes: 278715
num_examples: 1518
- name: validation
num_bytes: 105452
num_examples: 585
download_size: 5975590
dataset_size: 798625
---
# Dataset Card for SemEval-2018 Task 1: Affect in Tweets
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://competitions.codalab.org/competitions/17751
- **Repository:**
- **Paper:** http://saifmohammad.com/WebDocs/semeval2018-task1.pdf
- **Leaderboard:**
- **Point of Contact:** https://www.saifmohammad.com/
### Dataset Summary
Tasks: We present an array of tasks where systems have to automatically determine the intensity of emotions (E) and intensity of sentiment (aka valence V) of the tweeters from their tweets. (The term tweeter refers to the person who has posted the tweet.) We also include a multi-label emotion classification task for tweets. For each task, we provide separate training and test datasets for English, Arabic, and Spanish tweets. The individual tasks are described below:
1. EI-reg (an emotion intensity regression task): Given a tweet and an emotion E, determine the intensity of E that best represents the mental state of the tweeter—a real-valued score between 0 (least E) and 1 (most E).
Separate datasets are provided for anger, fear, joy, and sadness.
2. EI-oc (an emotion intensity ordinal classification task): Given a tweet and an emotion E, classify the tweet into one of four ordinal classes of intensity of E that best represents the mental state of the tweeter.
Separate datasets are provided for anger, fear, joy, and sadness.
3. V-reg (a sentiment intensity regression task): Given a tweet, determine the intensity of sentiment or valence (V) that best represents the mental state of the tweeter—a real-valued score between 0 (most negative) and 1 (most positive).
4. V-oc (a sentiment analysis, ordinal classification, task): Given a tweet, classify it into one of seven ordinal classes, corresponding to various levels of positive and negative sentiment intensity, that best represents the mental state of the tweeter.
5. E-c (an emotion classification task): Given a tweet, classify it as 'neutral or no emotion' or as one, or more, of eleven given emotions that best represent the mental state of the tweeter.
Here, E refers to emotion, EI refers to emotion intensity, V refers to valence or sentiment intensity, reg refers to regression, oc refers to ordinal classification, c refers to classification.
Together, these tasks encompass various emotion and sentiment analysis tasks. You are free to participate in any number of tasks and on any of the datasets.
**Currently only the subtask 5 (E-c) is available on the Hugging Face Dataset Hub.**
### Supported Tasks and Leaderboards
### Languages
English, Arabic and Spanish
## Dataset Structure
### Data Instances
An example from the `subtask5.english` config is:
```
{'ID': '2017-En-21441',
'Tweet': "“Worry is a down payment on a problem you may never have'. \xa0Joyce Meyer. #motivation #leadership #worry",
'anger': False,
'anticipation': True,
'disgust': False,
'fear': False,
'joy': False,
'love': False,
'optimism': True,
'pessimism': False,
'sadness': False,
'surprise': False,
'trust': True}
```
### Data Fields
For any config of the subtask 5:
- ID: string id of the tweet
- Tweet: text content of the tweet as a string
- anger: boolean, True if anger represents the mental state of the tweeter
- anticipation: boolean, True if anticipation represents the mental state of the tweeter
- disgust: boolean, True if disgust represents the mental state of the tweeter
- fear: boolean, True if fear represents the mental state of the tweeter
- joy: boolean, True if joy represents the mental state of the tweeter
- love: boolean, True if love represents the mental state of the tweeter
- optimism: boolean, True if optimism represents the mental state of the tweeter
- pessimism: boolean, True if pessimism represents the mental state of the tweeter
- sadness: boolean, True if sadness represents the mental state of the tweeter
- surprise: boolean, True if surprise represents the mental state of the tweeter
- trust: boolean, True if trust represents the mental state of the tweeter
Note that the test set has no labels, and therefore all labels are set to False.
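For multi-label classification, the eleven boolean fields are typically collapsed into a single multi-hot vector. A minimal sketch (plain Python; the fixed `EMOTIONS` ordering and the `to_multi_hot` helper are illustrative assumptions, and the sample row reuses the labels of the `2017-En-21441` example above):

```python
# Fixed label order assumed for the multi-hot encoding.
EMOTIONS = ["anger", "anticipation", "disgust", "fear", "joy", "love",
            "optimism", "pessimism", "sadness", "surprise", "trust"]

def to_multi_hot(row):
    """Map the boolean emotion columns of one row to a 0/1 vector."""
    return [int(row[e]) for e in EMOTIONS]

row = {"ID": "2017-En-21441",
       "anger": False, "anticipation": True, "disgust": False, "fear": False,
       "joy": False, "love": False, "optimism": True, "pessimism": False,
       "sadness": False, "surprise": False, "trust": True}

print(to_multi_hot(row))  # → [0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1]
```

The resulting vectors can be fed directly to a multi-label loss such as binary cross-entropy, one output unit per emotion.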
### Data Splits
| | train | validation | test |
|---------|------:|-----------:|------:|
| English | 6,838 | 886 | 3,259 |
| Arabic | 2,278 | 585 | 1,518 |
| Spanish | 3,561 | 679 | 2,854 |
## Dataset Creation
### Curation Rationale
### Source Data
Tweets
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Twitter users.
### Annotations
#### Annotation process
We presented one tweet at a time to the annotators and asked which of the following options best described the emotional state of the tweeter:
– anger (also includes annoyance, rage)
– anticipation (also includes interest, vigilance)
– disgust (also includes disinterest, dislike, loathing)
– fear (also includes apprehension, anxiety, terror)
– joy (also includes serenity, ecstasy)
– love (also includes affection)
– optimism (also includes hopefulness, confidence)
– pessimism (also includes cynicism, no confidence)
– sadness (also includes pensiveness, grief)
– surprise (also includes distraction, amazement)
– trust (also includes acceptance, liking, admiration)
– neutral or no emotion
Example tweets were provided in advance with examples of suitable responses.
On the Figure Eight task settings, we specified that we needed annotations from seven people for each tweet. However, because of the way the gold tweets were set up, they were annotated by more than seven people. The median number of annotations was still seven. In total, 303 people annotated between 10 and 4,670 tweets each. A total of 174,356 responses were obtained.
Mohammad, S., Bravo-Marquez, F., Salameh, M., & Kiritchenko, S. (2018). SemEval-2018 task 1: Affect in tweets. Proceedings of the 12th International Workshop on Semantic Evaluation, 1–17. https://doi.org/10.18653/v1/S18-1001
#### Who are the annotators?
Crowdworkers on Figure Eight.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
Saif M. Mohammad, Felipe Bravo-Marquez, Mohammad Salameh and Svetlana Kiritchenko
### Licensing Information
See the official [Terms and Conditions](https://competitions.codalab.org/competitions/17751#learn_the_details-terms_and_conditions)
### Citation Information
```
@InProceedings{SemEval2018Task1,
 author = {Mohammad, Saif M. and Bravo-Marquez, Felipe and Salameh, Mohammad and Kiritchenko, Svetlana},
 title = {SemEval-2018 {T}ask 1: {A}ffect in Tweets},
 booktitle = {Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)},
 address = {New Orleans, LA, USA},
 year = {2018}}
```
### Contributions
Thanks to [@maxpel](https://github.com/maxpel) for adding this dataset. |
huggingface-course/codeparrot-ds-train | 2021-09-13T14:33:48.000Z | [
"region:us"
] | huggingface-course | null | null | null | 4 | 1,796 | Entry not found |
cc_news | 2023-06-12T06:42:15.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | CC-News contains news articles from news sites all over the world. The data is available on AWS S3 in the Common Crawl bucket at /crawl-data/CC-NEWS/. This version of the dataset has 708241 articles. It represents a small portion of the English-language subset of CC-News, created using news-please (Hamborg et al., 2017) to collect and extract the English-language portion of CC-News. | @InProceedings{Hamborg2017,
author = {Hamborg, Felix and Meuschke, Norman and Breitinger, Corinna and Gipp, Bela},
title = {news-please: A Generic News Crawler and Extractor},
year = {2017},
booktitle = {Proceedings of the 15th International Symposium of Information Science},
location = {Berlin},
doi = {10.5281/zenodo.4120316},
pages = {218--223},
month = {March}
} | null | 37 | 1,792 | ---
pretty_name: CC-News
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: cc-news
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: domain
dtype: string
- name: date
dtype: string
- name: description
dtype: string
- name: url
dtype: string
- name: image_url
dtype: string
config_name: plain_text
splits:
- name: train
num_bytes: 2016418133
num_examples: 708241
download_size: 845131146
dataset_size: 2016418133
---
# Dataset Card for CC-News
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CC-News homepage](https://commoncrawl.org/2016/10/news-dataset-available/)
- **Point of Contact:** [Vladimir Blagojevic](mailto:dovlex@gmail.com)
### Dataset Summary
CC-News dataset contains news articles from news sites all over the world. The data is available on AWS S3 in the Common Crawl bucket at /crawl-data/CC-NEWS/.
This version of the dataset has been prepared using [news-please](https://github.com/fhamborg/news-please) - an integrated web crawler and information extractor for news.
It contains 708241 English language news articles published between Jan 2017 and December 2019.
It represents a small portion of the English language subset of the CC-News dataset.
### Supported Tasks and Leaderboards
CC-News has been mostly used for language model training.
### Languages
The text in the dataset is in the English language.
## Dataset Structure
### Data Instances
Dataset instance contains an article itself and the relevant article fields.
An example from the CC-News train set looks as follows:
```
{
'date': '2017-08-14 00:00:00',
'description': '"The spirit of Green Day has always been about rising above oppression."',
'domain': '1041jackfm.cbslocal.com',
'image_url': 'https://cbs1041jackfm.files.wordpress.com/2017/08/billie-joe-armstrong-theo-wargo-getty-images.jpg?w=946',
'text': 'By Abby Hassler\nGreen Day’s Billie Joe Armstrong has always been outspoken about his political beliefs. Following
the tragedy in Charlottesville, Virgina, over the weekend, Armstrong felt the need to speak out against the white supremacists
who caused much of the violence.\nRelated: Billie Joe Armstrong Wins #TBT with Childhood Studio Photo\n“My heart feels heavy.
I feel like what happened in Charlottesville goes beyond the point of anger,” Armstrong wrote on Facebook. “It makes me sad
and desperate. shocked. I f—— hate racism more than anything.”\n“The spirit of Green Day has always been about rising above
oppression. and sticking up for what you believe in and singing it at the top of your lungs,” Armstrong continued.
“We grew up fearing nuclear holocaust because of the cold war. those days are feeling way too relevant these days.
these issues are our ugly past.. and now it’s coming to haunt us. always resist these doomsday politicians. and in the
words of our punk forefathers .. Nazi punks f— off.”',
'title': 'Green Day’s Billie Joe Armstrong Rails Against White Nationalists',
'url': 'http://1041jackfm.cbslocal.com/2017/08/14/billie-joe-armstrong-white-nationalists/'
}
```
### Data Fields
- `date`: date of publication
- `description`: description or a summary of the article
- `domain`: source domain of the article (i.e. www.nytimes.com)
- `image_url`: URL of the article's image
- `text`: the actual article text in raw form
- `title`: title of the article
- `url`: article URL, the original URL where it was scraped.
### Data Splits
CC-News dataset has only the training set, i.e. it has to be loaded with `train` split specified:
`cc_news = load_dataset('cc_news', split="train")`
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
CC-News dataset has been proposed, created, and maintained by Sebastian Nagel.
The data is publicly available on AWS S3 Common Crawl bucket at /crawl-data/CC-NEWS/.
This version of the dataset has been prepared using [news-please](https://github.com/fhamborg/news-please) - an
integrated web crawler and information extractor for news.
It contains 708241 English language news articles published between Jan 2017 and December 2019.
Although news-please tags each news article with an appropriate language tag, these tags are somewhat unreliable.
To strictly isolate English language articles an additional check has been performed using
[Spacy langdetect pipeline](https://spacy.io/universe/project/spacy-langdetect).
We selected articles whose text fields scored an 80% probability or more of being English.
There are no strict guarantees that each article has all the relevant fields. For example, 527595
articles have a valid description field. All articles have what appears to be a valid image URL,
but they have not been verified.
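The 80% language-probability cutoff described above amounts to a simple threshold filter. The sketch below illustrates that filtering step in isolation; the probability scores here are hand-made stand-ins rather than real spacy-langdetect outputs, and the function signature is an assumption for illustration:

```python
def keep_english(articles, threshold=0.80):
    """Keep only articles whose detected-English probability meets the threshold.

    Each article is assumed to be a (text, english_probability) pair, where the
    probability would come from a language-detection pipeline such as
    spacy-langdetect; real scores are not reproduced here.
    """
    return [text for text, prob in articles if prob >= threshold]

# Hand-made sample with invented probabilities.
sample = [
    ("Green Day's Billie Joe Armstrong has always been outspoken.", 0.99),
    ("Ceci n'est pas un article en anglais.", 0.12),
    ("Borderline article with mixed-language content.", 0.79),
]
print(keep_english(sample))  # keeps only the first article
```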
#### Who are the source language producers?
The news websites throughout the World.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
As one can imagine, data contains contemporary public figures or individuals who appeared in the news.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help language model researchers develop better language models.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{Hamborg2017,
author = {Hamborg, Felix and Meuschke, Norman and Breitinger, Corinna and Gipp, Bela},
title = {news-please: A Generic News Crawler and Extractor},
year = {2017},
booktitle = {Proceedings of the 15th International Symposium of Information Science},
location = {Berlin},
doi = {10.5281/zenodo.4120316},
pages = {218--223},
month = {March}
}
```
### Contributions
Thanks to [@vblagoje](https://github.com/vblagoje) for adding this dataset. |
neural_code_search | 2023-06-01T14:59:50.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"arxiv:1908.09804",
"region:us"
] | null | Neural-Code-Search-Evaluation-Dataset presents an evaluation dataset consisting of natural language query and code snippet pairs and a search corpus consisting of code snippets collected from the most popular Android repositories on GitHub. | @InProceedings{huggingface:dataset,
title = {Neural Code Search Evaluation Dataset},
authors = {Hongyu Li, Seohyun Kim and Satish Chandra},
journal = {arXiv e-prints},
year = 2018,
eid = {arXiv:1908.09804 [cs.SE]},
pages = {arXiv:1908.09804 [cs.SE]},
archivePrefix = {arXiv},
eprint = {1908.09804},
} | null | 6 | 1,791 | ---
pretty_name: Neural Code Search
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
- n<1K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: neural-code-search-evaluation-dataset
dataset_info:
- config_name: evaluation_dataset
features:
- name: stackoverflow_id
dtype: int32
- name: question
dtype: string
- name: question_url
dtype: string
- name: question_author
dtype: string
- name: question_author_url
dtype: string
- name: answer
dtype: string
- name: answer_url
dtype: string
- name: answer_author
dtype: string
- name: answer_author_url
dtype: string
- name: examples
sequence: int32
- name: examples_url
sequence: string
splits:
- name: train
num_bytes: 296848
num_examples: 287
download_size: 383625
dataset_size: 296848
- config_name: search_corpus
features:
- name: id
dtype: int32
- name: filepath
dtype: string
- name: method_name
dtype: string
- name: start_line
dtype: int32
- name: end_line
dtype: int32
- name: url
dtype: string
splits:
- name: train
num_bytes: 1452630278
num_examples: 4716814
download_size: 121112543
dataset_size: 1452630278
config_names:
- evaluation_dataset
- search_corpus
---
# Dataset Card for Neural Code Search
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
[facebookresearch
/
Neural-Code-Search-Evaluation-Dataset](https://github.com/facebookresearch/Neural-Code-Search-Evaluation-Dataset/tree/master/data)
- **Repository:**
[Github](https://github.com/facebookresearch/Neural-Code-Search-Evaluation-Dataset.git)
- **Paper:**
[arXiv](https://arxiv.org/pdf/1908.09804.pdf)
### Dataset Summary
Neural-Code-Search-Evaluation-Dataset presents an evaluation dataset consisting of natural language query and code snippet pairs, with the hope that future work in this area can use this dataset as a common benchmark. We also provide the results of two code search models (NCS, UNIF) from recent work.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
EN - English
## Dataset Structure
### Data Instances
#### Search Corpus
The search corpus is indexed using all method bodies parsed from the 24,549 GitHub repositories. In total, there are 4,716,814 methods in this corpus. The code search model will find relevant code snippets (i.e. method bodies) from this corpus given a natural language query. In this data release, we will provide the following information for each method in the corpus:
#### Evaluation Dataset
The evaluation dataset is composed of 287 Stack Overflow question and answer pairs.
### Data Fields
#### Search Corpus
- id: Each method in the corpus has a unique numeric identifier. This ID number will also be referenced in our evaluation dataset.
- filepath: The file path, in the format :owner/:repo/relative-file-path-to-the-repo
- method_name: Name of the method.
- start_line: Starting line number of the method in the file.
- end_line: Ending line number of the method in the file.
- url: GitHub link to the method body with commit ID and line numbers encoded.
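Given the documented `:owner/:repo/relative-file-path-to-the-repo` pattern, the owner, repository, and in-repo path can be recovered with two splits. This is a sketch under the assumption that the stored value is slash-separated as documented; the example owner/repo names below are invented:

```python
def split_filepath(filepath):
    """Split a search-corpus filepath of the assumed form owner/repo/relative/path.

    The exact stored form is an assumption based on the documented
    ':owner/:repo/relative-file-path-to-the-repo' pattern.
    """
    owner, repo, relpath = filepath.split("/", 2)  # split on first two slashes only
    return owner, repo, relpath

# Hypothetical example value for illustration.
print(split_filepath("someowner/somerepo/app/src/main/Main.java"))
# ('someowner', 'somerepo', 'app/src/main/Main.java')
```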
#### Evaluation Dataset
- stackoverflow_id: Stack Overflow post ID.
- question: Title of the Stack Overflow post.
- question_url: URL of the Stack Overflow post.
- answer: Code snippet answer to the question.
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The most popular Android repositories on GitHub (ranked by the number of stars) were used to create the search corpus. For each repository that we indexed, we provide the link, specific to the commit that was used. In total, there are 24,549 repositories.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
Hongyu Li, Seohyun Kim and Satish Chandra
### Licensing Information
CC-BY-NC 4.0 (Attr Non-Commercial Inter.)
### Citation Information
arXiv:1908.09804 [cs.SE]
### Contributions
Thanks to [@vinaykudari](https://github.com/vinaykudari) for adding this dataset. |
lama | 2023-06-01T14:59:53.000Z | [
"task_categories:text-retrieval",
"task_categories:text-classification",
"task_ids:fact-checking-retrieval",
"task_ids:text-scoring",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:1M<n<10M",
"size_categories:n<1K",
"source_datasets:extended|conceptnet5",
"source_datasets:extended|squad",
"language:en",
"license:cc-by-4.0",
"probing",
"region:us"
] | null | LAMA is a dataset used to probe and analyze the factual and commonsense knowledge contained in pretrained language models. See https://github.com/facebookresearch/LAMA. | @inproceedings{petroni2019language,
title={Language Models as Knowledge Bases?},
author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
year={2019}
}
@inproceedings{petroni2020how,
title={How Context Affects Language Models' Factual Predictions},
author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
booktitle={Automated Knowledge Base Construction},
year={2020},
url={https://openreview.net/forum?id=025X0zPfn}
} | null | 7 | 1,767 | ---
pretty_name: 'LAMA: LAnguage Model Analysis'
annotations_creators:
- crowdsourced
- expert-generated
- machine-generated
language_creators:
- crowdsourced
- expert-generated
- machine-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
- 1K<n<10K
- 1M<n<10M
- n<1K
source_datasets:
- extended|conceptnet5
- extended|squad
task_categories:
- text-retrieval
- text-classification
task_ids:
- fact-checking-retrieval
- text-scoring
paperswithcode_id: lama
tags:
- probing
dataset_info:
- config_name: trex
features:
- name: uuid
dtype: string
- name: obj_uri
dtype: string
- name: obj_label
dtype: string
- name: sub_uri
dtype: string
- name: sub_label
dtype: string
- name: predicate_id
dtype: string
- name: sub_surface
dtype: string
- name: obj_surface
dtype: string
- name: masked_sentence
dtype: string
- name: template
dtype: string
- name: template_negated
dtype: string
- name: label
dtype: string
- name: description
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 656913189
num_examples: 1304391
download_size: 74652201
dataset_size: 656913189
- config_name: squad
features:
- name: id
dtype: string
- name: sub_label
dtype: string
- name: obj_label
dtype: string
- name: negated
dtype: string
- name: masked_sentence
dtype: string
splits:
- name: train
num_bytes: 57188
num_examples: 305
download_size: 74639115
dataset_size: 57188
- config_name: google_re
features:
- name: pred
dtype: string
- name: sub
dtype: string
- name: obj
dtype: string
- name: evidences
dtype: string
- name: judgments
dtype: string
- name: sub_w
dtype: string
- name: sub_label
dtype: string
- name: sub_aliases
dtype: string
- name: obj_w
dtype: string
- name: obj_label
dtype: string
- name: obj_aliases
dtype: string
- name: uuid
dtype: string
- name: masked_sentence
dtype: string
- name: template
dtype: string
- name: template_negated
dtype: string
splits:
- name: train
num_bytes: 7638657
num_examples: 6106
download_size: 74639115
dataset_size: 7638657
- config_name: conceptnet
features:
- name: uuid
dtype: string
- name: sub
dtype: string
- name: obj
dtype: string
- name: pred
dtype: string
- name: obj_label
dtype: string
- name: masked_sentence
dtype: string
- name: negated
dtype: string
splits:
- name: train
num_bytes: 4130000
num_examples: 29774
download_size: 74639115
dataset_size: 4130000
config_names:
- conceptnet
- google_re
- squad
- trex
---
# Dataset Card for LAMA: LAnguage Model Analysis - a dataset for probing and analyzing the factual and commonsense knowledge contained in pretrained language models.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
https://github.com/facebookresearch/LAMA
- **Repository:**
https://github.com/facebookresearch/LAMA
- **Paper:**
```
@inproceedings{petroni2019language,
 title={Language Models as Knowledge Bases?},
 author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
 booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
 year={2019}
}
@inproceedings{petroni2020how,
 title={How Context Affects Language Models' Factual Predictions},
 author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
 booktitle={Automated Knowledge Base Construction},
 year={2020},
 url={https://openreview.net/forum?id=025X0zPfn}
}
```
### Dataset Summary
This dataset provides the data for LAMA. The dataset includes a subset
of Google_RE
(https://code.google.com/archive/p/relation-extraction-corpus/), T-REx
(a subset of Wikidata triples), ConceptNet
(https://github.com/commonsense/conceptnet5/wiki) and SQuAD. There are
configs for each of "google_re", "trex", "conceptnet" and "squad",
respectively.
The dataset includes some cleanup, and addition of a masked sentence
and associated answers for the [MASK] token. The accuracy in
predicting the [MASK] token shows how well the language model knows
facts and common sense information. The [MASK] tokens are only for the
"object" slots.
This version of the dataset includes "negated" sentences as well as
the masked sentence. Also, certain configs include "template"
and "template_negated" fields of the form "[X] some text [Y]", where
[X] and [Y] are the subject and object slots, respectively, of certain
relations.
See the paper for more details. For more information, also see:
https://github.com/facebookresearch/LAMA
### Languages
en
## Dataset Structure
### Data Instances
The trex config has the following fields:
```
{'description': 'the item (an institution, law, public office ...) or statement belongs to or has power over or applies to the value (a territorial jurisdiction: a country, state, municipality, ...)', 'label': 'applies to jurisdiction', 'masked_sentence': 'It is known as a principality as it is a monarchy headed by two Co-Princes – the Spanish/Roman Catholic Bishop of Urgell and the President of [MASK].', 'obj_label': 'France', 'obj_surface': 'France', 'obj_uri': 'Q142', 'predicate_id': 'P1001', 'sub_label': 'president of the French Republic', 'sub_surface': 'President', 'sub_uri': 'Q191954', 'template': '[X] is a legal term in [Y] .', 'template_negated': '[X] is not a legal term in [Y] .', 'type': 'N-M', 'uuid': '3fe3d4da-9df9-45ba-8109-784ce5fba38a'}
```
The conceptnet config has the following fields:
```
{'masked_sentence': 'One of the things you do when you are alive is [MASK].', 'negated': '', 'obj': 'think', 'obj_label': 'think', 'pred': 'HasSubevent', 'sub': 'alive', 'uuid': 'd4f11631dde8a43beda613ec845ff7d1'}
```
The squad config has the following fields:
```
{'id': '56be4db0acb8001400a502f0_0', 'masked_sentence': 'To emphasize the 50th anniversary of the Super Bowl the [MASK] color was used.', 'negated': "['To emphasize the 50th anniversary of the Super Bowl the [MASK] color was not used.']", 'obj_label': 'gold', 'sub_label': 'Squad'}
```
The google_re config has the following fields:
```
{'evidences': '[{\'url\': \'http://en.wikipedia.org/wiki/Peter_F._Martin\', \'snippet\': "Peter F. Martin (born 1941) is an American politician who is a Democratic member of the Rhode Island House of Representatives. He has represented the 75th District Newport since 6 January 2009. He is currently serves on the House Committees on Judiciary, Municipal Government, and Veteran\'s Affairs. During his first term of office he served on the House Committees on Small Business and Separation of Powers & Government Oversight. In August 2010, Representative Martin was appointed as a Commissioner on the Atlantic States Marine Fisheries Commission", \'considered_sentences\': [\'Peter F Martin (born 1941) is an American politician who is a Democratic member of the Rhode Island House of Representatives .\']}]', 'judgments': "[{'rater': '18349444711114572460', 'judgment': 'yes'}, {'rater': '17595829233063766365', 'judgment': 'yes'}, {'rater': '4593294093459651288', 'judgment': 'yes'}, {'rater': '7387074196865291426', 'judgment': 'yes'}, {'rater': '17154471385681223613', 'judgment': 'yes'}]", 'masked_sentence': 'Peter F Martin (born [MASK]) is an American politician who is a Democratic member of the Rhode Island House of Representatives .', 'obj': '1941', 'obj_aliases': '[]', 'obj_label': '1941', 'obj_w': 'None', 'pred': '/people/person/date_of_birth', 'sub': '/m/09gb0bw', 'sub_aliases': '[]', 'sub_label': 'Peter F. Martin', 'sub_w': 'None', 'template': '[X] (born [Y]).', 'template_negated': '[X] (not born [Y]).', 'uuid': '18af2dac-21d3-4c42-aff5-c247f245e203'}
```
### Data Fields
The trex config has the following fields:
* uuid: the id
* obj_uri: a uri for the object slot
* obj_label: a label for the object slot
* sub_uri: a uri for the subject slot
* sub_label: a label for the subject slot
* predicate_id: the predicate/relationship
* sub_surface: the surface text for the subject
* obj_surface: The surface text for the object. This is the word that should be predicted by the [MASK] token.
* masked_sentence: The masked sentence used to probe, with the object word replaced with [MASK]
* template: A pattern of text for extracting the relationship, object and subject of the form "[X] some text [Y]", where [X] and [Y] are the subject and object slots respectively. template may be missing and replaced with an empty string.
* template_negated: Same as above, except the [Y] is not the object. template_negated may be missing and replaced with empty strings.
* label: the label for the relationship/predicate. label may be missing and replaced with an empty string.
* description': a description of the relationship/predicate. description may be missing and replaced with an empty string.
* type: a type id for the relationship/predicate. type may be missing and replaced with an empty string.
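The "[X] some text [Y]" templates described above can be instantiated by plain string substitution of the subject and object labels. A minimal sketch, using the values from the trex data instance shown earlier in this card:

```python
def fill_template(template, sub_label, obj_label):
    """Instantiate a '[X] some text [Y]' template with subject and object labels."""
    return template.replace("[X]", sub_label).replace("[Y]", obj_label)

# Values taken from the trex data instance shown earlier in this card.
sentence = fill_template(
    "[X] is a legal term in [Y] .",
    sub_label="president of the French Republic",
    obj_label="France",
)
print(sentence)  # president of the French Republic is a legal term in France .
```

Replacing "[Y]" with a model's mask token instead of `obj_label` yields a cloze-style probe in the spirit of the masked_sentence field.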
The conceptnet config has the following fields:
* uuid: the id
* sub: the subject. subj may be missing and replaced with an empty string.
* obj: the object to be predicted. obj may be missing and replaced with an empty string.
* pred: the predicate/relationship
* obj_label: the object label
* masked_sentence: The masked sentence used to probe, with the object word replaced with [MASK]
* negated: same as above, except [MASK] is replaced by something that is not the object word. negated may be missing and replaced with empty strings.
The squad config has the following fields:
* id: the id
* sub_label: the subject label
* obj_label: the object label that is being predicted
* masked_sentence: The masked sentence used to probe, with the object word replaced with [MASK]
* negated: same as above, except [MASK] is replaced by something that is not the object word. negated may be missing and replaced with empty strings.
The google_re config has the following fields:
* uuid: the id
* pred: the predicate
* sub: the subject. subj may be missing and replaced with an empty string.
* obj: the object. obj may be missing and replaced with an empty string.
* evidences: flattened json string that provides evidence for predicate. parse this json string to get more 'snippet' information.
* judgments: data about judgments
* sub_w: unknown
* sub_label: label for the subject
* sub_aliases: unknown
* obj_w: unknown
* obj_label: label for the object
* obj_aliases: unknown
* masked_sentence: The masked sentence used to probe, with the object word replaced with [MASK]
* template: A pattern of text for extracting the relationship, object and subject of the form "[X] some text [Y]", where [X] and [Y] are the subject and object slots respectively.
* template_negated: Same as above, except the [Y] is not the object.
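Since evidences is a flattened string, it must be parsed back into a list of dicts before the 'snippet' information can be used. In the instance shown above the field uses Python-literal quoting, so the sketch below parses it with `ast.literal_eval`; whether every record uses this quoting (versus strict JSON) is an assumption, and the sample value is a shortened, hand-made stand-in:

```python
import ast

def snippets_from_evidences(evidences_field):
    """Parse the flattened 'evidences' string and pull out the snippet texts.

    The instance shown above uses Python-literal quoting, so ast.literal_eval
    is used here; if a given record stored strict JSON instead, json.loads
    would be the right parser.
    """
    return [item.get("snippet", "") for item in ast.literal_eval(evidences_field)]

# Shortened stand-in for a real 'evidences' value, in the quoting style shown above.
evidences = "[{'url': 'http://en.wikipedia.org/wiki/Peter_F._Martin', 'snippet': 'Peter F. Martin (born 1941) is an American politician.'}]"
print(snippets_from_evidences(evidences))
# ['Peter F. Martin (born 1941) is an American politician.']
```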
### Data Splits
There are no data splits.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created to probe what language models understand.
### Source Data
#### Initial Data Collection and Normalization
See the research paper and website for more detail. The dataset was
gathered from various other datasets, with cleanups for probing.
#### Who are the source language producers?
The LAMA authors and the original authors of the various configs.
### Annotations
#### Annotation process
Human annotations under the original datasets (conceptnet), and various machine annotations.
#### Who are the annotators?
Human annotations and machine annotations.
### Personal and Sensitive Information
Unknown, but likely includes the names of famous people.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to probe the understanding of language models.
### Discussion of Biases
Since the data is from human annotators, there are likely to be biases.
[More Information Needed]
### Other Known Limitations
The original documentation for the data fields is limited.
## Additional Information
### Dataset Curators
The authors of LAMA at Facebook and the authors of the original datasets.
### Licensing Information
The Creative Commons Attribution-Noncommercial 4.0 International License. see https://github.com/facebookresearch/LAMA/blob/master/LICENSE
### Citation Information
```
@inproceedings{petroni2019language,
 title={Language Models as Knowledge Bases?},
 author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
 booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
 year={2019}
}
@inproceedings{petroni2020how,
 title={How Context Affects Language Models' Factual Predictions},
 author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
 booktitle={Automated Knowledge Base Construction},
 year={2020},
 url={https://openreview.net/forum?id=025X0zPfn}
}
```
### Contributions
Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset. |
nlphuji/flickr30k | 2023-01-19T17:40:41.000Z | [
"region:us"
] | nlphuji | null | null | null | 11 | 1,766 | # Flickr30k
Original paper: [From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions](https://aclanthology.org/Q14-1006)
Homepage: https://shannon.cs.illinois.edu/DenotationGraph/
Bibtex:
```
@article{young2014image,
title={From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions},
author={Young, Peter and Lai, Alice and Hodosh, Micah and Hockenmaier, Julia},
journal={Transactions of the Association for Computational Linguistics},
volume={2},
pages={67--78},
year={2014},
publisher={MIT Press}
}
``` |
center-for-humans-and-machines/style-diffusion | 2023-06-30T17:45:02.000Z | [
"region:us"
] | center-for-humans-and-machines | null | null | null | 0 | 1,766 | ---
dataset_info:
features:
- name: vectorId
dtype: string
- name: medianYear
dtype: int32
- name: embedding
sequence: float32
splits:
- name: train
num_bytes: 3448928
num_examples: 1113
download_size: 0
dataset_size: 3448928
---
# Dataset Card for "style-diffusion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mteb/stsbenchmark-sts | 2022-09-27T19:11:21.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 4 | 1,757 | ---
language:
- en
--- |
fusing/fill50k | 2023-03-10T22:36:46.000Z | [
"region:us"
] | fusing | null | null | null | 12 | 1,757 | Entry not found |
kde4 | 2022-11-03T16:32:20.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:af",
"language:ar",
"language:as",
"language:ast",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:ca",
"language:crh",
"language:cs",
"language:csb",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:gu",
"language:ha",
"language:he",
"language:hi",
"language:hne",
"language:hr",
"language:hsb",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:lb",
"language:lt",
"language:lv",
"language:mai",
"language:mk",
"language:ml",
"language:mr",
"language:ms",
"language:mt",
"language:nb",
"language:nds",
"language:ne",
"language:nl",
"language:nn",
"language:nso",
"language:oc",
"language:or",
"language:pa",
"language:pl",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:rw",
"language:se",
"language:si",
"language:sk",
"language:sl",
"language:sr",
"language:sv",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tr",
"language:uk",
"language:uz",
"language:vi",
"language:wa",
"language:xh",
"language:zh",
"license:unknown",
"region:us"
] | null | A parallel corpus of KDE4 localization files (v.2).
92 languages, 4,099 bitexts
total number of files: 75,535
total number of tokens: 60.75M
total number of sentence fragments: 8.89M | @InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
} | null | 11 | 1,756 | ---
annotations_creators:
- found
language_creators:
- found
language:
- af
- ar
- as
- ast
- be
- bg
- bn
- br
- ca
- crh
- cs
- csb
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gl
- gu
- ha
- he
- hi
- hne
- hr
- hsb
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- km
- kn
- ko
- ku
- lb
- lt
- lv
- mai
- mk
- ml
- mr
- ms
- mt
- nb
- nds
- ne
- nl
- nn
- nso
- oc
- or
- pa
- pl
- ps
- pt
- ro
- ru
- rw
- se
- si
- sk
- sl
- sr
- sv
- ta
- te
- tg
- th
- tr
- uk
- uz
- vi
- wa
- xh
- zh
language_bcp47:
- bn-IN
- en-GB
- pt-BR
- zh-CN
- zh-HK
- zh-TW
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: KDE4
dataset_info:
- config_name: fi-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fi
- nl
splits:
- name: train
num_bytes: 8845933
num_examples: 101593
download_size: 2471355
dataset_size: 8845933
- config_name: it-ro
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- ro
splits:
- name: train
num_bytes: 8827049
num_examples: 109003
download_size: 2389051
dataset_size: 8827049
- config_name: nl-sv
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- nl
- sv
splits:
- name: train
num_bytes: 22294586
num_examples: 188454
download_size: 6203460
dataset_size: 22294586
- config_name: en-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 27132585
num_examples: 220566
download_size: 7622662
dataset_size: 27132585
- config_name: en-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 25650409
num_examples: 210173
download_size: 7049364
dataset_size: 25650409
---
# Dataset Card for KDE4
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/KDE4.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
To load a language pair that isn't among the preset configs, specify the two language codes explicitly.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/KDE4.php
E.g.
`dataset = load_dataset("kde4", lang1="en", lang2="nl")`
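Each row pairs an `id` with a `translation` dict keyed by language code; a small sketch of pulling a sentence pair out of such a row (the example record is illustrative, not taken from the corpus):

```python
# A row as produced by the `translation` feature: one string per language code.
record = {"id": "0", "translation": {"en": "Open the file.", "nl": "Open het bestand."}}

def get_pair(example: dict, src: str, tgt: str) -> tuple:
    """Extract a (source, target) sentence pair from a KDE4-style row."""
    translation = example["translation"]
    return translation[src], translation[tgt]

src_text, tgt_text = get_pair(record, "en", "nl")
print(src_text, "->", tgt_text)  # Open the file. -> Open het bestand.
```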
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
laion/laion2B-en-aesthetic | 2023-01-18T20:03:33.000Z | [
"region:us"
] | laion | null | null | null | 22 | 1,753 | details at https://github.com/LAION-AI/laion-datasets/blob/main/laion-aesthetic.md |
baber/hendrycks_math | 2023-08-25T21:15:56.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"arxiv:2103.03874",
"region:us"
] | baber | MATH is a dataset of 12,500 challenging competition mathematics problems. Each
problem in MATH has a full step-by-step solution which can be used to teach
models to generate answer derivations and explanations. | @article{hendrycksmath2021,
title={Measuring Mathematical Problem Solving With the MATH Dataset},
author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},
journal={NeurIPS},
year={2021}
} | null | 0 | 1,753 | ---
license: mit
task_categories:
- text-generation
language:
- en
pretty_name: MATH
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://github.com/hendrycks/math/blob/main/README.md
- **Repository:** https://github.com/hendrycks/math
- **Paper:** https://arxiv.org/abs/2103.03874
### Dataset Summary
MATH contains 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations.
### Languages
English
## Dataset Structure
### Data Instances
The dataset comprises 7 subject-area sub-datasets.
### Data Splits
training: 7500
test: 5000
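Solutions in MATH conventionally mark the final answer with `\boxed{...}`; a small sketch of extracting that answer from a solution string (the helper below is an illustrative utility, not part of the dataset's own tooling):

```python
def extract_boxed(solution: str) -> str:
    """Return the contents of the last \\boxed{...} in a solution, handling nested braces."""
    start = solution.rfind(r"\boxed{")
    if start == -1:
        return ""
    i = start + len(r"\boxed{")
    depth = 1
    out = []
    while i < len(solution):
        ch = solution[i]
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                break
        out.append(ch)
        i += 1
    return "".join(out)

sol = r"Adding the fractions gives the answer $\boxed{\frac{3}{4}}$."
print(extract_boxed(sol))  # \frac{3}{4}
```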
## Additional Information
### Licensing Information
MIT but check the [Legal Compliance](https://arxiv.org/pdf/2103.03874.pdf) section in appendix B of the paper as well as the [repo](https://github.com/hendrycks/math/blob/main/LICENSE).
### Citation Information
@article{hendrycksmath2021,
title={Measuring Mathematical Problem Solving With the MATH Dataset},
author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},
journal={NeurIPS},
year={2021}
}
|
cardiffnlp/tweet_sentiment_multilingual | 2022-11-30T14:01:25.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-tweet-datasets",
"language:en",
"language:ar",
"language:fr",
"language:de",
"language:hi",
"language:it",
"language:pt",
"language:es",
"region:us"
] | cardiffnlp | null | @inproceedings{barbieri-etal-2022-xlm,
title = "{XLM}-{T}: Multilingual Language Models in {T}witter for Sentiment Analysis and Beyond",
author = "Barbieri, Francesco and
Espinosa Anke, Luis and
Camacho-Collados, Jose",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.27",
pages = "258--266",
abstract = "Language models are ubiquitous in current NLP, and their multilingual capacity has recently attracted considerable attention. However, current analyses have almost exclusively focused on (multilingual variants of) standard benchmarks, and have relied on clean pre-training and task-specific corpora as multilingual signals. In this paper, we introduce XLM-T, a model to train and evaluate multilingual language models in Twitter. In this paper we provide: (1) a new strong multilingual baseline consisting of an XLM-R (Conneau et al. 2020) model pre-trained on millions of tweets in over thirty languages, alongside starter code to subsequently fine-tune on a target task; and (2) a set of unified sentiment analysis Twitter datasets in eight different languages and a XLM-T model trained on this dataset.",
} | null | 10 | 1,735 | ---
language:
- en
- ar
- fr
- de
- hi
- it
- pt
- es
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-tweet-datasets
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: tweet_sentiment_multilingual
pretty_name: Tweet Sentiment Multilingual
train-eval-index:
- config: sentiment
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
configs:
- arabic
- english
- french
- german
- hindi
- italian
- portuguese
- spanish
dataset_info:
- config_name: sentiment
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
0: negative
1: neutral
2: positive
---
# Dataset Card for cardiffnlp/tweet_sentiment_multilingual
## Dataset Description
- **Homepage:** [https://github.com/cardiffnlp/xlm-t](https://github.com/cardiffnlp/xlm-t)
- **Repository:** [https://github.com/cardiffnlp/xlm-t](https://github.com/cardiffnlp/xlm-t)
- **Paper:** [https://aclanthology.org/2022.lrec-1.27/](https://aclanthology.org/2022.lrec-1.27/)
- **Point of Contact:** [Asahi Ushio](https://asahiushio.com/)
### Dataset Summary
Tweet Sentiment Multilingual consists of sentiment analysis datasets of tweets in 8 different languages:
- arabic
- english
- french
- german
- hindi
- italian
- portuguese
- spanish
### Supported Tasks and Leaderboards
- `text_classification`: The dataset can be trained using a SentenceClassification model from HuggingFace transformers.
## Dataset Structure
### Data Instances
An instance from `sentiment` config:
```
{'label': 2, 'text': '"QT @user In the original draft of the 7th book, Remus Lupin survived the Battle of Hogwarts. #HappyBirthdayRemusLupin"'}
```
### Data Fields
For `sentiment` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: negative
`1`: neutral
`2`: positive
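The integer labels can be mapped back to their names with a small lookup (a sketch; when the dataset is loaded with `datasets`, the `ClassLabel` feature's `int2str` provides the same mapping):

```python
LABELS = {0: "negative", 1: "neutral", 2: "positive"}

def id2label(label_id: int) -> str:
    """Map a sentiment label id to its string name."""
    return LABELS[label_id]

example = {"label": 2, "text": "In the original draft of the 7th book, Remus Lupin survived."}
print(id2label(example["label"]))  # positive
```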
### Data Splits
- arabic
- english
- french
- german
- hindi
- italian
- portuguese
- spanish
| name | train | validation | test |
| --------------- | ----- | ---------- | ----- |
| arabic | 1838 | 323 | 869 |
| english | 1838 | 323 | 869 |
| french | 1838 | 323 | 869 |
| german | 1838 | 323 | 869 |
| hindi | 1838 | 323 | 869 |
| italian | 1838 | 323 | 869 |
| portuguese | 1838 | 323 | 869 |
| spanish | 1838 | 323 | 869 |
### Dataset Curators
Francesco Barbieri, Jose Camacho-Collados, Luis Espiinosa-Anke and Leonardo Neves through Cardiff NLP.
### Licensing Information
[Creative Commons Attribution 3.0 Unported License](https://groups.google.com/g/semevaltweet/c/k5DDcvVb_Vo/m/zEOdECFyBQAJ), and all of the datasets require complying with Twitter [Terms Of Service](https://twitter.com/tos) and Twitter API [Terms Of Service](https://developer.twitter.com/en/developer-terms/agreement-and-policy)
### Citation Information
```
@inproceedings{barbieri-etal-2022-xlm,
title = "{XLM}-{T}: Multilingual Language Models in {T}witter for Sentiment Analysis and Beyond",
author = "Barbieri, Francesco and
Espinosa Anke, Luis and
Camacho-Collados, Jose",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.27",
pages = "258--266",
abstract = "Language models are ubiquitous in current NLP, and their multilingual capacity has recently attracted considerable attention. However, current analyses have almost exclusively focused on (multilingual variants of) standard benchmarks, and have relied on clean pre-training and task-specific corpora as multilingual signals. In this paper, we introduce XLM-T, a model to train and evaluate multilingual language models in Twitter. In this paper we provide: (1) a new strong multilingual baseline consisting of an XLM-R (Conneau et al. 2020) model pre-trained on millions of tweets in over thirty languages, alongside starter code to subsequently fine-tune on a target task; and (2) a set of unified sentiment analysis Twitter datasets in eight different languages and a XLM-T model trained on this dataset.",
}
```
|
yizhongw/self_instruct | 2023-03-07T10:07:36.000Z | [
"license:apache-2.0",
"arxiv:2212.10560",
"arxiv:2204.07705",
"region:us"
] | yizhongw | Self-Instruct is a dataset that contains 52k instructions, paired with 82K instance inputs and outputs. This instruction data can be used to conduct instruction tuning for language models and make them follow instructions better. | @misc{selfinstruct,
title={Self-Instruct: Aligning Language Model with Self Generated Instructions},
author={Wang, Yizhong and Kordi, Yeganeh and Mishra, Swaroop and Liu, Alisa and Smith, Noah A. and Khashabi, Daniel and Hajishirzi, Hannaneh},
journal={arXiv preprint arXiv:2212.10560},
year={2022}
} | null | 161 | 1,726 | ---
license: apache-2.0
dataset_info:
- config_name: self_instruct
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 20527462
num_examples: 82612
download_size: 24113858
dataset_size: 20527462
- config_name: human_eval
features:
- name: id
dtype: string
- name: motivation_app
dtype: string
- name: instruction
dtype: string
- name: instances
sequence:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 151244
num_examples: 252
download_size: 170193
dataset_size: 151244
- config_name: super_natural_instructions
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 40352923
num_examples: 50000
- name: test
num_bytes: 9713953
num_examples: 11810
download_size: 52975509
dataset_size: 50066876
- config_name: prompt_source
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 57368889
num_examples: 52657
download_size: 60126945
dataset_size: 57368889
- config_name: p3
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 57368889
num_examples: 52657
download_size: 60126945
dataset_size: 57368889
---
# Dataset Card for Self Instruct
## Table of Contents
- [Dataset Card for Self Instruct](#dataset-card-for-self-instruct)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [self\_instruct](#self_instruct)
- [super\_natural\_instructions](#super_natural_instructions)
- [p3](#p3)
- [human\_eval](#human_eval)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [self\_instruct](#self_instruct-1)
- [super\_natural\_instructions](#super_natural_instructions-1)
- [p3](#p3-1)
- [human\_eval](#human_eval-1)
- [Data Fields](#data-fields)
- [self\_instruct](#self_instruct-2)
- [super\_natural\_instructions](#super_natural_instructions-2)
- [p3](#p3-2)
- [human\_eval](#human_eval-2)
- [Data Splits](#data-splits)
- [self\_instruct](#self_instruct-3)
- [super\_natural\_instructions](#super_natural_instructions-3)
- [p3](#p3-3)
- [human\_eval](#human_eval-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/yizhongw/self-instruct
- **Paper:** https://arxiv.org/abs/2212.10560
- **Leaderboard:**
- **Point of Contact:** Yizhong Wang
### Dataset Summary
Self-Instruct is a framework that helps language models improve their ability to follow natural language instructions. It does this by using the model's own generations to create a large collection of instructional data. With Self-Instruct, it is possible to improve the instruction-following capabilities of language models without relying on extensive manual annotation.
As part of this framework, the Self-Instruct authors released a dataset that contains 52k instructions, paired with 82K instance inputs and outputs. This instruction data can be used to conduct instruction tuning for language models and make them follow instructions better.
The authors also released a new set of 252 expert-written tasks and their instructions motivated by user-oriented applications (rather than well-studied NLP tasks). This data is used in the human evaluation section of [the Self Instruct paper](https://arxiv.org/abs/2212.10560).
To enable comparison on public datasets, Self-Instruct also contains 50k examples from the P3 and Super Natural Instructions datasets.
### Supported Tasks and Leaderboards
The datasets in Self-Instruct are designed for _instruction training_ pretrained language models. The following subsets are provided as part of Self Instruct.
#### self_instruct
82k prompts and model completions generated via OpenAI's `davinci` engine.
#### super_natural_instructions
50k expert-written instructions and demonstrations sampled from the [Super Natural Instructions dataset](https://arxiv.org/abs/2204.07705)
#### p3
50k crowd-sourced instructions and demonstrations sampled from the [Public Pool of Prompts (P3) dataset](https://huggingface.co/datasets/bigscience/P3)
#### human_eval
252 expert-written tasks and their instructions motivated by user-oriented applications (rather than well-studied NLP tasks). This data is used in the human evaluation section of [the self-instruct paper](https://arxiv.org/abs/2212.10560). Please refer to [the human evaluation information](https://github.com/yizhongw/self-instruct/blob/main/human_eval/README.md) for more details.
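The `human_eval` records nest multiple instances under one instruction, unlike the flat prompt/completion shape of the other subsets. A sketch of flattening one record (the prompt-joining format here is an assumption for illustration, not the paper's exact template):

```python
def flatten_task(task: dict) -> list:
    """Expand a human_eval-style record into prompt/completion pairs."""
    pairs = []
    for inp, out in zip(task["instances"]["input"], task["instances"]["output"]):
        # Append the instance input only when it is non-empty.
        prompt = task["instruction"] + ("\n\nInput: " + inp if inp else "")
        pairs.append({"prompt": prompt, "completion": out})
    return pairs

task = {
    "instruction": "Choose the best books from the given genre.",
    "instances": {"input": ["Crime & Mystery"], "output": ["1- The Girl with the Dragon Tattoo"]},
}
print(flatten_task(task)[0]["prompt"])
```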
### Languages
The data in Self Instruct are in English (BCP-47 en).
## Dataset Structure
### Data Instances
#### self_instruct
An example of "train" looks as follows:
```json
{
"prompt": "Explain the origin of life on earth. Output:",
"completion": " Life on Earth is believed to have begun about 3.5 billion years ago, when the planet was still in its early stages of formation. The earliest evidence for life on Earth are fossils of microorganisms (bacteria and archaea) that lived approximately 3.5 billion years ago. These organisms were capable of converting carbon dioxide into organic compounds using sunlight as an energy source. This process is known as photosynthesis.",
}
```
#### super_natural_instructions
An example of "train" looks as follows:
```json
{
"prompt": "Write a correct answer for the question. You should reason about prototypical situations and provide the correct answer. Avoid answers that provide incomplete justification for the question.\n\nInput: name something people often buy on credit.\n\nOutput:",
"completion": " appliances",
}
```
#### p3
An example of "train" looks as follows:
```json
{
"prompt": "Determine if the following two sentences paraphrase each other or not.\nSent 1: By adding sufficient oxygen to compensate for the metabolic usage , rebreathing the carbon dioxide and removing the gas , most of the volume is conserved .\nSent 2: By adding sufficient oxygen to compensate for the metabolic consumption , removing the carbon dioxide and reinhaling the gas , most of the volume is conserved .\n",
"completion": "No",
}
```
#### human_eval
An example of "train" looks as follows:
```json
{
"id": "user_oriented_task_136",
"motivation_app": "Goodreads",
"instruction": "Choose the best books from the given genre.",
"instances": {
"input": ["Crime & Mystery"],
"output": [
"1- The Girl with the Dragon Tattoo\n2- And Then There Were None\n3- Angels & Demons\n4- Rebecca\n5- In Cold Blood\n6- The Godfather\n7- The Lovely Bones\n8- Gone Girl\n9- The Name of the Rose\n10- Shutter Island"
],
},
}
```
### Data Fields
The data fields for each configuration are as follows.
#### self_instruct
* `prompt`: The instruction provided to the model or human labeler.
* `completion`: A completion provided by the model or human labeler.
#### super_natural_instructions
* `prompt`: The instruction provided to the model or human labeler.
* `completion`: A completion provided by the model or human labeler.
#### p3
* `prompt`: The instruction provided to the model or human labeler.
* `completion`: A completion provided by the model or human labeler.
#### human_eval
* `id`: The ID associated with the labelling task
* `motivation_app`: The application associated with the task
* `instruction`: The instruction written by the human labeler.
* `instances.input`: The input that forms part of the complete instruction
* `instances.output`: The human written demonstration
### Data Splits
#### self_instruct
| | train |
|---------------|------:|
| self_instruct | 82612 |
#### super_natural_instructions
| | train | test |
|----------------------------|------:|------:|
| super_natural_instructions | 50000 | 11810 |
#### p3
| | train |
|----|------:|
| p3 | 52657 |
#### human_eval
| | train |
|------------|------:|
| human_eval | 252 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The `self_instruct` data is generated by a language model (GPT-3) and inevitably contains some errors or biases. The authors analyzed the data quality of 200 random instructions in the paper and found that 46% of the data points may have problems. Users are encouraged to use this data with caution and to propose new methods to filter or improve the imperfections.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{selfinstruct,
title={Self-Instruct: Aligning Language Model with Self Generated Instructions},
author={Wang, Yizhong and Kordi, Yeganeh and Mishra, Swaroop and Liu, Alisa and Smith, Noah A. and Khashabi, Daniel and Hajishirzi, Hannaneh},
journal={arXiv preprint arXiv:2212.10560},
year={2022}
}
``` |
schema_guided_dstc8 | 2023-01-25T14:43:36.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:token-classification",
"task_categories:text-classification",
"task_ids:dialogue-modeling",
"task_ids:multi-class-classification",
"task_ids:parsing",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:1909.05855",
"arxiv:2002.01359",
"region:us"
] | null | The Schema-Guided Dialogue dataset (SGD) was developed for the Dialogue State Tracking task of the Eighth Dialogue Systems Technology Challenge (dstc8).
The SGD dataset consists of over 18k annotated multi-domain, task-oriented conversations between a human and a virtual assistant.
These conversations involve interactions with services and APIs spanning 17 domains, ranging from banks and events to media, calendar, travel, and weather.
For most of these domains, the SGD dataset contains multiple different APIs, many of which have overlapping functionalities but different interfaces,
which reflects common real-world scenarios. | @inproceedings{aaai/RastogiZSGK20,
author = {Abhinav Rastogi and
Xiaoxue Zang and
Srinivas Sunkara and
Raghav Gupta and
Pranav Khaitan},
title = {Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided
Dialogue Dataset},
booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI}
2020, The Thirty-Second Innovative Applications of Artificial Intelligence
Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational
Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA,
February 7-12, 2020},
pages = {8689--8696},
publisher = {{AAAI} Press},
year = {2020},
url = {https://aaai.org/ojs/index.php/AAAI/article/view/6394}
} | null | 7 | 1,719 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- token-classification
- text-classification
task_ids:
- dialogue-modeling
- multi-class-classification
- parsing
paperswithcode_id: sgd
pretty_name: Schema-Guided Dialogue
dataset_info:
- config_name: dialogues
features:
- name: dialogue_id
dtype: string
- name: services
sequence: string
- name: turns
sequence:
- name: speaker
dtype:
class_label:
names:
'0': USER
'1': SYSTEM
- name: utterance
dtype: string
- name: frames
sequence:
- name: service
dtype: string
- name: slots
sequence:
- name: slot
dtype: string
- name: start
dtype: int32
- name: exclusive_end
dtype: int32
- name: state
struct:
- name: active_intent
dtype: string
- name: requested_slots
sequence: string
- name: slot_values
sequence:
- name: slot_name
dtype: string
- name: slot_value_list
sequence: string
- name: actions
sequence:
- name: act
dtype:
class_label:
names:
'0': AFFIRM
'1': AFFIRM_INTENT
'2': CONFIRM
'3': GOODBYE
'4': INFORM
'5': INFORM_COUNT
'6': INFORM_INTENT
'7': NEGATE
'8': NEGATE_INTENT
'9': NOTIFY_FAILURE
'10': NOTIFY_SUCCESS
'11': OFFER
'12': OFFER_INTENT
'13': REQUEST
'14': REQUEST_ALTS
'15': REQ_MORE
'16': SELECT
'17': THANK_YOU
- name: slot
dtype: string
- name: canonical_values
sequence: string
- name: values
sequence: string
- name: service_results
sequence:
- name: service_results_list
sequence:
- name: service_slot_name
dtype: string
- name: service_canonical_value
dtype: string
- name: service_call
struct:
- name: method
dtype: string
- name: parameters
sequence:
- name: parameter_slot_name
dtype: string
- name: parameter_canonical_value
dtype: string
splits:
- name: train
num_bytes: 158452984
num_examples: 16142
- name: validation
num_bytes: 23553544
num_examples: 2482
- name: test
num_bytes: 41342956
num_examples: 4201
download_size: 617805368
dataset_size: 223349484
- config_name: schema
features:
- name: service_name
dtype: string
- name: description
dtype: string
- name: slots
sequence:
- name: name
dtype: string
- name: description
dtype: string
- name: is_categorical
dtype: bool
- name: possible_values
sequence: string
- name: intents
sequence:
- name: name
dtype: string
- name: description
dtype: string
- name: is_transactional
dtype: bool
- name: required_slots
sequence: string
- name: optional_slots
sequence:
- name: slot_name
dtype: string
- name: slot_value
dtype: string
- name: result_slots
sequence: string
splits:
- name: train
num_bytes: 31513
num_examples: 26
- name: validation
num_bytes: 18798
num_examples: 17
- name: test
num_bytes: 22487
num_examples: 21
download_size: 617805368
dataset_size: 72798
---
# Dataset Card for The Schema-Guided Dialogue Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github repository for The Schema-Guided Dialogue Dataset](https://github.com/google-research-datasets/dstc8-schema-guided-dialogue)
- **Paper:** [Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided Dialogue Dataset](https://arxiv.org/abs/1909.05855)
- **Point of Contact:** [abhirast@google.com](mailto:abhirast@google.com)
### Dataset Summary
The Schema-Guided Dialogue dataset (SGD) was developed for the Dialogue State Tracking task of the Eighth Dialogue Systems Technology Challenge (DSTC8).
The SGD dataset consists of over 18k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 17 domains, ranging from banks and events to media, calendar, travel, and weather. For most of these domains, the SGD dataset contains multiple different APIs, many of which have overlapping functionalities but different interfaces, which reflects common real-world scenarios.
### Supported Tasks and Leaderboards
This dataset is designed to serve as an effective test-bed for intent prediction, slot filling, state tracking (i.e., estimating the user's goal) and language generation, among other tasks for large-scale virtual assistants:
- **Generative dialogue modeling** or `dialogue-modeling`: the text of the dialogues can be used to train a sequence model on the utterances. Performance on this task is typically evaluated with delexicalized-[BLEU](https://huggingface.co/metrics/bleu), inform rate and request success.
- **Intent state tracking**, a `multi-class-classification` task: predict the belief state of the user side of the conversation; performance is measured by [F1](https://huggingface.co/metrics/f1).
- **Action prediction**, a `parsing` task: parse an utterance into the corresponding dialog acts for the system to use. [F1](https://huggingface.co/metrics/f1) is typically reported.
### Languages
The text in the dataset is in English (`en`).
## Dataset Structure
### Data Instances
- `dialogues` configuration (default): Each dialogue is represented as a sequence of turns, each containing a user or system utterance. The annotations for each turn are grouped into frames, where each frame corresponds to a single service. The annotations for user turns include the active intent, the dialogue state and slot spans for the different slots values mentioned in the turn. For system turns, we have the system actions representing the semantics of the system utterance. Each system action is represented using a dialogue act with optional parameters.
- `schema` configuration: In addition to the dialogues, for each service used in the dataset, a normalized representation of the interface exposed is provided as the schema. The schema contains details like the name of the service, the list of tasks supported by the service (intents) and the attributes of the entities used by the service (slots). The schema also contains natural language descriptions of the service, intents and slots which can be used for developing models which can condition their predictions on the schema.
### Data Fields
Each dialog instance has the following fields:
- `dialogue_id`: A unique identifier for a dialogue.
- `services`: A list of services present in the dialogue.
- `turns`: A list of annotated system or user utterances. Each turn consists of the following fields:
- `speaker`: The speaker for the turn. Either `USER` or `SYSTEM`.
- `utterance`: A string containing the natural language utterance.
- `frames`: A list of frames, each frame containing annotations for a single service and consists of the following fields:
- `service`: The name of the service corresponding to the frame. The slots and intents used in the following fields are taken from the schema of this service.
- `slots`: A list of slot spans in the utterance, only provided for non-categorical slots. Each slot span contains the following fields:
- `slot`: The name of the slot.
- `start`: The index of the starting character in the utterance corresponding to the slot value.
- `exclusive_end`: The index of the character just after the last character corresponding to the slot value in the utterance.
- `actions`: A list of actions corresponding to the system. Each action has the following fields:
- `act`: The type of action.
- `slot`: (optional) A slot argument for some of the actions.
- `values`: (optional) A list of values assigned to the slot. If the values list is non-empty, then the slot must be present.
- `canonical_values`: (optional) The values in their canonicalized form as used by the service. It is a list of strings of the same length as values.
- `service_call`: (system turns only, optional) The request sent to the service. It consists of the following fields:
- `method`: The name of the intent or function of the service or API being executed.
- `parameters`: A pair of lists of the same lengths: `parameter_slot_name` contains slot names and `parameter_canonical_value` contains the corresponding values in their canonicalized form.
- `service_results`: (system turns only, optional) A list of entities containing the results obtained from the service. It is only available for turns in which a service call is made. Each entity is represented as a pair of lists of the same length: `service_slot_name` contains slot names and `service_canonical_value` contains the corresponding canonical values.
- `state`: (user turns only) The dialogue state corresponding to the service. It consists of the following fields:
- `active_intent`: The intent corresponding to the service of the frame which is currently being fulfilled by the system. It takes the value "NONE" if none of the intents are active.
- `requested_slots`: A list of slots requested by the user in the current turn.
- `slot_values`: A pair of lists of the same lengths: `slot_name` contains slot names and `slot_value_list` contains the corresponding lists of strings. For categorical slots, this list contains a single value assigned to the slot. For non-categorical slots, all the values in this list are spoken variations of each other and are equivalent (e.g., "6 pm", "six in the evening", "evening at 6" etc.).
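Several of the fields above store paired values as parallel lists of equal length (e.g. `slot_values`, `parameters`, `service_results`). A minimal plain-Python sketch of zipping them back into an ordinary dictionary; the example state below is illustrative, not taken from the dataset:

```python
def state_slot_values_to_dict(slot_values):
    """Zip the parallel `slot_name` / `slot_value_list` sequences of a
    frame's `state` into a {slot_name: [values]} dictionary."""
    return dict(zip(slot_values["slot_name"], slot_values["slot_value_list"]))

# Illustrative state, shaped like the `state` field described above.
state = {
    "active_intent": "FindRestaurants",
    "requested_slots": [],
    "slot_values": {
        "slot_name": ["city", "time"],
        "slot_value_list": [["San Jose"], ["6 pm", "six in the evening"]],
    },
}

values = state_slot_values_to_dict(state["slot_values"])
# values["time"] holds all equivalent spoken variations of the slot value
```

The same pattern applies to the `parameters` and `service_results` pairs on system turns.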
The mapping from action IDs to action names is the following:
- 0: AFFIRM
- 1: AFFIRM_INTENT
- 2: CONFIRM
- 3: GOODBYE
- 4: INFORM
- 5: INFORM_COUNT
- 6: INFORM_INTENT
- 7: NEGATE
- 8: NEGATE_INTENT
- 9: NOTIFY_FAILURE
- 10: NOTIFY_SUCCESS
- 11: OFFER
- 12: OFFER_INTENT
- 13: REQUEST
- 14: REQUEST_ALTS
- 15: REQ_MORE
- 16: SELECT
- 17: THANK_YOU
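Since `act` is encoded as an integer class label, a lookup list mirroring the mapping above converts between IDs and names. A plain-Python sketch:

```python
# Action names in ID order, mirroring the mapping listed above.
SGD_ACTS = [
    "AFFIRM", "AFFIRM_INTENT", "CONFIRM", "GOODBYE", "INFORM",
    "INFORM_COUNT", "INFORM_INTENT", "NEGATE", "NEGATE_INTENT",
    "NOTIFY_FAILURE", "NOTIFY_SUCCESS", "OFFER", "OFFER_INTENT",
    "REQUEST", "REQUEST_ALTS", "REQ_MORE", "SELECT", "THANK_YOU",
]

def act_name(act_id: int) -> str:
    """Map an integer action ID to its name."""
    return SGD_ACTS[act_id]

def act_id(name: str) -> int:
    """Map an action name back to its integer ID."""
    return SGD_ACTS.index(name)
```

When the dataset is loaded with the `datasets` library, the `act` `ClassLabel` feature also exposes `int2str`/`str2int` for the same conversion.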
### Data Splits
The dataset is split into `train`, `validation`, and `test` splits with the following sizes:
| | train | validation | test |
|---------------------|------:|-----------:|------:|
| Number of dialogues | 16142 | 2482 | 4201 |
| Number of turns | 48426 | 7446 | 12603 |
## Dataset Creation
### Curation Rationale
The data was collected by first using a dialogue simulator to generate dialogue outlines and then paraphrasing them to obtain natural utterances. Using a dialogue simulator ensures the coverage of a large variety of dialogue flows by filtering out similar flows in the simulation phase to create a diverse dataset, and dialogues can be generated with their annotations, as opposed to a Wizard-of-Oz setup which is prone to manual annotation errors.
### Source Data
#### Initial Data Collection and Normalization
The dialogue outlines are first generated by a simulator. The dialogue simulator interacts with the services to generate dialogue outlines. It consists of two
agents playing the roles of the user and the system, interacting with each other using a finite set of actions specified through dialogue acts over a probabilistic automaton designed to capture varied dialogue trajectories. It is worth noting that the simulation automaton does not include any domain-specific constraints: all domain-specific constraints are encoded in the schema and scenario.
The dialogue paraphrasing framework then converts the outlines generated by the simulator into a natural conversation. Users may refer to the slot values in the dialogue acts in various ways during the conversation, e.g., “los angeles” may be referred to as “LA” or “LAX”. To introduce these natural variations in the slot values, different slot values are replaced with a randomly selected variation while being kept consistent across user turns in a dialogue. The actions are then converted to pseudo-natural language utterances using a set of manually defined action-to-text templates, and the resulting utterances for the different actions in a turn are concatenated together.
Finally, the dialogue transformed by these steps is sent to the crowd workers to be reformulated into more natural language. One crowd worker is tasked with paraphrasing all utterances of a dialogue to ensure naturalness and coherence. The crowd workers are asked to exactly repeat the slot values in their paraphrases so that the span indices for the slots can be recovered via string matching.
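Because workers repeat slot values verbatim, span recovery reduces to plain string matching. A sketch of that recovery step, yielding the `start`/`exclusive_end` indices described above (`recover_span` is a hypothetical helper and the utterance is illustrative, not taken from the dataset):

```python
def recover_span(utterance: str, slot_value: str):
    """Recover (start, exclusive_end) character indices of a slot value
    that the crowd worker repeated verbatim in the paraphrase."""
    start = utterance.find(slot_value)
    if start == -1:
        return None  # worker altered the value; span cannot be recovered
    return start, start + len(slot_value)

utterance = "Sure, I found a flight to Los Angeles leaving at 6 pm."
span = recover_span(utterance, "Los Angeles")
start, exclusive_end = span
assert utterance[start:exclusive_end] == "Los Angeles"
```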
#### Who are the source language producers?
The language structure is machine-generated, and the language realizations are produced by crowd workers. The dataset paper does not provide demographic information for the crowd workers.
### Annotations
#### Annotation process
The annotations are automatically obtained during the initial sampling process and by string matching after reformulation.
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by a team of researchers working at Google, Mountain View.
### Licensing Information
The dataset is released under the CC BY-SA 4.0 license.
### Citation Information
For the DSTC8 task, please cite:
```
@article{corr/abs-2002-01359,
author = {Abhinav Rastogi and
Xiaoxue Zang and
Srinivas Sunkara and
Raghav Gupta and
Pranav Khaitan},
title = {Schema-Guided Dialogue State Tracking Task at {DSTC8}},
journal = {CoRR},
volume = {abs/2002.01359},
year = {2020},
url = {https://arxiv.org/abs/2002.01359},
archivePrefix = {arXiv},
eprint = {2002.01359}
}
```
For the initial release paper please cite:
```
@inproceedings{aaai/RastogiZSGK20,
author = {Abhinav Rastogi and
Xiaoxue Zang and
Srinivas Sunkara and
Raghav Gupta and
Pranav Khaitan},
title = {Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided
Dialogue Dataset},
booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI}
2020, The Thirty-Second Innovative Applications of Artificial Intelligence
Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational
Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA,
February 7-12, 2020},
pages = {8689--8696},
publisher = {{AAAI} Press},
year = {2020},
url = {https://aaai.org/ojs/index.php/AAAI/article/view/6394}
}
```
### Contributions
Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset. |
subjqa | 2023-03-16T13:27:54.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:extended|yelp_review_full",
"source_datasets:extended|other-amazon_reviews_ucsd",
"source_datasets:extended|other-tripadvisor_reviews",
"language:en",
"license:unknown",
"arxiv:2004.14283",
"region:us"
] | null | SubjQA is a question answering dataset that focuses on subjective questions and answers.
The dataset consists of roughly 10,000 questions over reviews from 6 different domains: books, movies, grocery,
electronics, TripAdvisor (i.e. hotels), and restaurants. | @inproceedings{bjerva20subjqa,
title = "SubjQA: A Dataset for Subjectivity and Review Comprehension",
author = "Bjerva, Johannes and
Bhutani, Nikita and
Golshan, Behzad and
Tan, Wang-Chiew and
Augenstein, Isabelle",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2020",
publisher = "Association for Computational Linguistics",
} | null | 6 | 1,711 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
- extended|yelp_review_full
- extended|other-amazon_reviews_ucsd
- extended|other-tripadvisor_reviews
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: subjqa
pretty_name: subjqa
dataset_info:
- config_name: books
features:
- name: domain
dtype: string
- name: nn_mod
dtype: string
- name: nn_asp
dtype: string
- name: query_mod
dtype: string
- name: query_asp
dtype: string
- name: q_reviews_id
dtype: string
- name: question_subj_level
dtype: int64
- name: ques_subj_score
dtype: float32
- name: is_ques_subjective
dtype: bool
- name: review_id
dtype: string
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer_subj_level
dtype: int64
- name: ans_subj_score
dtype: float32
- name: is_ans_subjective
dtype: bool
splits:
- name: train
num_bytes: 2473128
num_examples: 1314
- name: test
num_bytes: 649413
num_examples: 345
- name: validation
num_bytes: 460214
num_examples: 256
download_size: 11384657
dataset_size: 3582755
- config_name: electronics
features:
- name: domain
dtype: string
- name: nn_mod
dtype: string
- name: nn_asp
dtype: string
- name: query_mod
dtype: string
- name: query_asp
dtype: string
- name: q_reviews_id
dtype: string
- name: question_subj_level
dtype: int64
- name: ques_subj_score
dtype: float32
- name: is_ques_subjective
dtype: bool
- name: review_id
dtype: string
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer_subj_level
dtype: int64
- name: ans_subj_score
dtype: float32
- name: is_ans_subjective
dtype: bool
splits:
- name: train
num_bytes: 2123648
num_examples: 1295
- name: test
num_bytes: 608899
num_examples: 358
- name: validation
num_bytes: 419042
num_examples: 255
download_size: 11384657
dataset_size: 3151589
- config_name: grocery
features:
- name: domain
dtype: string
- name: nn_mod
dtype: string
- name: nn_asp
dtype: string
- name: query_mod
dtype: string
- name: query_asp
dtype: string
- name: q_reviews_id
dtype: string
- name: question_subj_level
dtype: int64
- name: ques_subj_score
dtype: float32
- name: is_ques_subjective
dtype: bool
- name: review_id
dtype: string
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer_subj_level
dtype: int64
- name: ans_subj_score
dtype: float32
- name: is_ans_subjective
dtype: bool
splits:
- name: train
num_bytes: 1317488
num_examples: 1124
- name: test
num_bytes: 721827
num_examples: 591
- name: validation
num_bytes: 254432
num_examples: 218
download_size: 11384657
dataset_size: 2293747
- config_name: movies
features:
- name: domain
dtype: string
- name: nn_mod
dtype: string
- name: nn_asp
dtype: string
- name: query_mod
dtype: string
- name: query_asp
dtype: string
- name: q_reviews_id
dtype: string
- name: question_subj_level
dtype: int64
- name: ques_subj_score
dtype: float32
- name: is_ques_subjective
dtype: bool
- name: review_id
dtype: string
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer_subj_level
dtype: int64
- name: ans_subj_score
dtype: float32
- name: is_ans_subjective
dtype: bool
splits:
- name: train
num_bytes: 2986348
num_examples: 1369
- name: test
num_bytes: 620513
num_examples: 291
- name: validation
num_bytes: 589663
num_examples: 261
download_size: 11384657
dataset_size: 4196524
- config_name: restaurants
features:
- name: domain
dtype: string
- name: nn_mod
dtype: string
- name: nn_asp
dtype: string
- name: query_mod
dtype: string
- name: query_asp
dtype: string
- name: q_reviews_id
dtype: string
- name: question_subj_level
dtype: int64
- name: ques_subj_score
dtype: float32
- name: is_ques_subjective
dtype: bool
- name: review_id
dtype: string
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer_subj_level
dtype: int64
- name: ans_subj_score
dtype: float32
- name: is_ans_subjective
dtype: bool
splits:
- name: train
num_bytes: 1823331
num_examples: 1400
- name: test
num_bytes: 335453
num_examples: 266
- name: validation
num_bytes: 349354
num_examples: 267
download_size: 11384657
dataset_size: 2508138
- config_name: tripadvisor
features:
- name: domain
dtype: string
- name: nn_mod
dtype: string
- name: nn_asp
dtype: string
- name: query_mod
dtype: string
- name: query_asp
dtype: string
- name: q_reviews_id
dtype: string
- name: question_subj_level
dtype: int64
- name: ques_subj_score
dtype: float32
- name: is_ques_subjective
dtype: bool
- name: review_id
dtype: string
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer_subj_level
dtype: int64
- name: ans_subj_score
dtype: float32
- name: is_ans_subjective
dtype: bool
splits:
- name: train
num_bytes: 1575021
num_examples: 1165
- name: test
num_bytes: 689508
num_examples: 512
- name: validation
num_bytes: 312645
num_examples: 230
download_size: 11384657
dataset_size: 2577174
---
# Dataset Card for subjqa
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/lewtun/SubjQA
- **Paper:** https://arxiv.org/abs/2004.14283
- **Point of Contact:** [Lewis Tunstall](mailto:lewis.c.tunstall@gmail.com)
### Dataset Summary
SubjQA is a question answering dataset that focuses on subjective (as opposed to factual) questions and answers. The dataset consists of roughly **10,000** questions over reviews from 6 different domains: books, movies, grocery, electronics, TripAdvisor (i.e. hotels), and restaurants. Each question is paired with a review and a span is highlighted as the answer to the question (with some questions having no answer). Moreover, both questions and answer spans are assigned a _subjectivity_ label by annotators. A question such as _"How much does this product weigh?"_ is a factual question (i.e., low subjectivity), while _"Is this easy to use?"_ is a subjective question (i.e., high subjectivity).
In short, SubjQA provides a setting to study how well extractive QA systems perform on finding answers that are less factual, and to what extent modeling subjectivity can improve the performance of QA systems.
_Note:_ Much of the information provided on this dataset card is taken from the README provided by the authors in their GitHub repository ([link](https://github.com/megagonlabs/SubjQA)).
To load a domain with `datasets` you can run the following:
```python
from datasets import load_dataset
# other options include: electronics, grocery, movies, restaurants, tripadvisor
dataset = load_dataset("subjqa", "books")
```
### Supported Tasks and Leaderboards
* `question-answering`: The dataset can be used to train a model for extractive question answering, which involves questions whose answer can be identified as a span of text in a review. Success on this task is typically measured by achieving a high Exact Match or F1 score. The BERT model that is first fine-tuned on SQuAD 2.0 and then further fine-tuned on SubjQA achieves the scores shown in the figure below.

### Languages
The text in the dataset is in English and the associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
An example from `books` domain is shown below:
```json
{
"answers": {
"ans_subj_score": [1.0],
"answer_start": [324],
"answer_subj_level": [2],
"is_ans_subjective": [true],
"text": ["This is a wonderfully written book"],
},
"context": "While I would not recommend this book to a young reader due to a couple pretty explicate scenes I would recommend it to any adult who just loves a good book. Once I started reading it I could not put it down. I hesitated reading it because I didn't think that the subject matter would be interesting, but I was so wrong. This is a wonderfully written book.",
"domain": "books",
"id": "0255768496a256c5ed7caed9d4e47e4c",
"is_ques_subjective": false,
"nn_asp": "matter",
"nn_mod": "interesting",
"q_reviews_id": "a907837bafe847039c8da374a144bff9",
"query_asp": "part",
"query_mod": "fascinating",
"ques_subj_score": 0.0,
"question": "What are the parts like?",
"question_subj_level": 2,
"review_id": "a7f1a2503eac2580a0ebbc1d24fffca1",
"title": "0002007770",
}
```
### Data Fields
Each domain and split consists of the following columns:
* ```title```: The id of the item/business discussed in the review.
* ```question```: The question (written based on a query opinion).
* ```id```: A unique id assigned to the question-review pair.
* ```q_reviews_id```: A unique id assigned to all question-review pairs with a shared question.
* ```question_subj_level```: The subjectivity level of the question (on a 1 to 5 scale with 1 being the most subjective).
* ```ques_subj_score```: The subjectivity score of the question computed using the [TextBlob](https://textblob.readthedocs.io/en/dev/) package.
* ```context```: The review (that mentions the neighboring opinion).
* ```review_id```: A unique id associated with the review.
* ```answers.text```: The span labeled by annotators as the answer.
* ```answers.answer_start```: The (character-level) start index of the answer span highlighted by annotators.
* ```is_ques_subjective```: A boolean subjectivity label derived from ```question_subj_level``` (i.e., scores below 4 are considered subjective)
* ```answers.answer_subj_level```: The subjectivity level of the answer span (on a 1 to 5 scale with 1 being the most subjective).
* ```answers.ans_subj_score```: The subjectivity score of the answer span computed using the [TextBlob](https://textblob.readthedocs.io/en/dev/) package.
* ```answers.is_ans_subjective```: A boolean subjectivity label derived from ```answer_subj_level``` (i.e., scores below 4 are considered subjective)
* ```domain```: The category/domain of the review (e.g., hotels, books, ...).
* ```nn_mod```: The modifier of the neighboring opinion (which appears in the review).
* ```nn_asp```: The aspect of the neighboring opinion (which appears in the review).
* ```query_mod```: The modifier of the query opinion (around which a question is manually written).
* ```query_asp```: The aspect of the query opinion (around which a question is manually written).
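Putting the span fields together: `answers.answer_start` is a character offset into `context`, so a span can be sliced out directly or re-derived by string matching, and the boolean subjectivity flags follow from the 1-5 levels. A plain-Python sketch with illustrative values (not a real dataset instance):

```python
def answer_span(context: str, start: int, text: str) -> str:
    """Slice the labeled answer span out of the review text."""
    return context[start : start + len(text)]

def is_subjective(level: int) -> bool:
    """Derive the boolean flag from a 1-5 subjectivity level
    (per the field descriptions above, scores below 4 count as subjective)."""
    return level < 4

# Illustrative review and answer (not taken from the dataset).
context = "The battery life is great, but the screen is dim."
text = "the screen is dim"
start = context.find(text)  # re-derive the character offset via string matching

assert answer_span(context, start, text) == text
```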
### Data Splits
The question-review pairs from each domain are split into training, development, and test sets. The table below shows the size of the dataset for each domain and split.
| Domain | Train | Dev | Test | Total |
|-------------|-------|-----|------|-------|
| TripAdvisor | 1165 | 230 | 512 | 1686 |
| Restaurants | 1400 | 267 | 266 | 1683 |
| Movies | 1369 | 261 | 291 | 1677 |
| Books | 1314 | 256 | 345 | 1668 |
| Electronics | 1295 | 255 | 358 | 1659 |
| Grocery | 1124 | 218 | 591 | 1725 |
Based on the subjectivity labels provided by annotators, one observes that 73% of the questions and 74% of the answers in the dataset are subjective. This provides a substantial number of subjective QA pairs as well as a reasonable number of factual questions to compare and contrast the performance of QA systems on each type of QA pair.
Finally, the next table summarizes the average length of the question, the review, and the highlighted answer span for each category.
| Domain | Review Len | Question Len | Answer Len | % answerable |
|-------------|------------|--------------|------------|--------------|
| TripAdvisor | 187.25 | 5.66 | 6.71 | 78.17 |
| Restaurants | 185.40 | 5.44 | 6.67 | 60.72 |
| Movies | 331.56 | 5.59 | 7.32 | 55.69 |
| Books | 285.47 | 5.78 | 7.78 | 52.99 |
| Electronics | 249.44 | 5.56 | 6.98 | 58.89 |
| Grocery | 164.75 | 5.44 | 7.25 | 64.69 |
## Dataset Creation
### Curation Rationale
Most question-answering datasets like SQuAD and Natural Questions focus on answering questions over factual data such as Wikipedia and news articles. However, in domains like e-commerce the questions and answers are often _subjective_, that is, they depend on the personal experience of the users. For example, a customer on Amazon may ask "Is the sound quality any good?", which is more difficult to answer than a factoid question like "What is the capital of Australia?" These considerations motivate the creation of SubjQA as a tool to investigate the relationship between subjectivity and question-answering.
### Source Data
#### Initial Data Collection and Normalization
The SubjQA dataset is constructed based on publicly available review datasets. Specifically, the _movies_, _books_, _electronics_, and _grocery_ categories are constructed using reviews from the [Amazon Review dataset](http://jmcauley.ucsd.edu/data/amazon/links.html). The _TripAdvisor_ category, as the name suggests, is constructed using reviews from TripAdvisor which can be found [here](http://times.cs.uiuc.edu/~wang296/Data/). Finally, the _restaurants_ category is constructed using the [Yelp Dataset](https://www.yelp.com/dataset) which is also publicly available.
The process of constructing SubjQA is discussed in detail in the [paper](https://arxiv.org/abs/2004.14283). In a nutshell, the dataset construction consists of the following steps:
1. First, all _opinions_ expressed in reviews are extracted. In the pipeline, each opinion is modeled as a (_modifier_, _aspect_) pair, i.e., a pair of spans where the former describes the latter. (good, hotel) and (terrible, acting) are a few examples of extracted opinions.
2. Using Matrix Factorization techniques, implication relationships between different expressed opinions are mined. For instance, the system mines that "responsive keys" implies "good keyboard". In our pipeline, we refer to the conclusion of an implication (i.e., "good keyboard" in this example) as the _query_ opinion, and we refer to the premise (i.e., "responsive keys") as its _neighboring_ opinion.
3. Annotators are then asked to write a question based on _query_ opinions. For instance, given "good keyboard" as the query opinion, they might write "Is this keyboard any good?"
4. Each question written based on a _query_ opinion is then paired with a review that mentions its _neighboring_ opinion. In our example, that would be a review that mentions "responsive keys".
5. The question and review pairs are presented to annotators to select the correct answer span, and rate the subjectivity level of the question as well as the subjectivity level of the highlighted answer span.
A visualisation of the data collection pipeline is shown in the image below.

#### Who are the source language producers?
As described above, the source data for SubjQA is customer reviews of products and services on e-commerce websites like Amazon and TripAdvisor.
### Annotations
#### Annotation process
The generation of questions and answer span labels were obtained through the [Appen](https://appen.com/) platform. From the SubjQA paper:
> The platform provides quality control by showing the workers 5 questions at a time, out of which one is labeled by the experts. A worker who fails to maintain 70% accuracy is kicked out by the platform and his judgements are ignored ... To ensure good quality labels, we paid each worker 5 cents per annotation.
The instructions for generating a question are shown in the following figure:
<img width="874" alt="ques_gen" src="https://user-images.githubusercontent.com/26859204/117259092-03d67300-ae4e-11eb-81f2-9077fee1085f.png">
Similarly, the interface for the answer span and subjectivity labelling tasks is shown below:

As described in the SubjQA paper, the workers assign subjectivity scores (1-5) to each question and the selected answer span. They can also indicate if a question cannot be answered from the given review.
#### Who are the annotators?
Workers on the Appen platform.
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The SubjQA dataset can be used to develop question-answering systems that can provide better on-demand answers to e-commerce customers who are interested in subjective questions about products and services.
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The people involved in creating the SubjQA dataset are the authors of the accompanying paper:
* Johannes Bjerva, Department of Computer Science, University of Copenhagen, and Department of Computer Science, Aalborg University
* Nikita Bhutani, Megagon Labs, Mountain View
* Behzad Golshan, Megagon Labs, Mountain View
* Wang-Chiew Tan, Megagon Labs, Mountain View
* Isabelle Augenstein, Department of Computer Science, University of Copenhagen
### Licensing Information
The SubjQA dataset is provided "as-is", and its creators make no representation as to its accuracy.
The SubjQA dataset is constructed based on the following datasets and thus contains subsets of their data:
* [Amazon Review Dataset](http://jmcauley.ucsd.edu/data/amazon/links.html) from UCSD
* Used for _books_, _movies_, _grocery_, and _electronics_ domains
* [The TripAdvisor Dataset](http://times.cs.uiuc.edu/~wang296/Data/) from UIUC's Database and Information Systems Laboratory
* Used for the _TripAdvisor_ domain
* [The Yelp Dataset](https://www.yelp.com/dataset)
* Used for the _restaurants_ domain
Consequently, the data within each domain of the SubjQA dataset should be considered under the same license as the dataset it was built upon.
### Citation Information
If you are using the dataset, please cite the following in your work:
```
@inproceedings{bjerva20subjqa,
title = "SubjQA: A Dataset for Subjectivity and Review Comprehension",
author = "Bjerva, Johannes and
Bhutani, Nikita and
      Golshan, Behzad and
Tan, Wang-Chiew and
Augenstein, Isabelle",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
year = "2020",
publisher = "Association for Computational Linguistics",
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset. |
civil_comments | 2023-06-30T11:26:30.000Z | [
"language:en",
"license:cc0-1.0",
"arxiv:1903.04561",
"region:us"
] | null | The comments in this dataset come from an archive of the Civil Comments
platform, a commenting plugin for independent news sites. These public comments
were created from 2015 - 2017 and appeared on approximately 50 English-language
news sites across the world. When Civil Comments shut down in 2017, they chose
to make the public comments available in a lasting open archive to enable future
research. The original data, published on figshare, includes the public comment
text, some associated metadata such as article IDs, timestamps and
commenter-generated "civility" labels, but does not include user ids. Jigsaw
extended this dataset by adding additional labels for toxicity and identity
mentions. This data set is an exact replica of the data released for the
Jigsaw Unintended Bias in Toxicity Classification Kaggle challenge. This
dataset is released under CC0, as is the underlying comment text. | @article{DBLP:journals/corr/abs-1903-04561,
author = {Daniel Borkan and
Lucas Dixon and
Jeffrey Sorensen and
Nithum Thain and
Lucy Vasserman},
title = {Nuanced Metrics for Measuring Unintended Bias with Real Data for Text
Classification},
journal = {CoRR},
volume = {abs/1903.04561},
year = {2019},
url = {http://arxiv.org/abs/1903.04561},
archivePrefix = {arXiv},
eprint = {1903.04561},
timestamp = {Sun, 31 Mar 2019 19:01:24 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/abs-1903-04561},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 3 | 1,709 | ---
language:
- en
paperswithcode_id: null
pretty_name: CivilComments
dataset_info:
features:
- name: text
dtype: string
- name: toxicity
dtype: float32
- name: severe_toxicity
dtype: float32
- name: obscene
dtype: float32
- name: threat
dtype: float32
- name: insult
dtype: float32
- name: identity_attack
dtype: float32
- name: sexual_explicit
dtype: float32
splits:
- name: test
num_bytes: 32073013
num_examples: 97320
- name: train
num_bytes: 596835730
num_examples: 1804874
- name: validation
num_bytes: 32326369
num_examples: 97320
download_size: 414947977
dataset_size: 661235112
license: cc0-1.0
---
# Dataset Card for "civil_comments"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 414.95 MB
- **Size of the generated dataset:** 661.23 MB
- **Total amount of disk used:** 1.08 GB
### Dataset Summary
The comments in this dataset come from an archive of the Civil Comments
platform, a commenting plugin for independent news sites. These public comments
were created from 2015 - 2017 and appeared on approximately 50 English-language
news sites across the world. When Civil Comments shut down in 2017, they chose
to make the public comments available in a lasting open archive to enable future
research. The original data, published on figshare, includes the public comment
text, some associated metadata such as article IDs, timestamps and
commenter-generated "civility" labels, but does not include user ids. Jigsaw
extended this dataset by adding additional labels for toxicity and identity
mentions. This data set is an exact replica of the data released for the
Jigsaw Unintended Bias in Toxicity Classification Kaggle challenge. This
dataset is released under CC0, as is the underlying comment text.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 414.95 MB
- **Size of the generated dataset:** 661.23 MB
- **Total amount of disk used:** 1.08 GB
An example of 'validation' looks as follows.
```
{
"identity_attack": 0.0,
"insult": 0.0,
"obscene": 0.0,
"severe_toxicity": 0.0,
"sexual_explicit": 0.0,
"text": "The public test.",
"threat": 0.0,
"toxicity": 0.0
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `text`: a `string` feature.
- `toxicity`: a `float32` feature.
- `severe_toxicity`: a `float32` feature.
- `obscene`: a `float32` feature.
- `threat`: a `float32` feature.
- `insult`: a `float32` feature.
- `identity_attack`: a `float32` feature.
- `sexual_explicit`: a `float32` feature.
### Data Splits
| name | train |validation|test |
|-------|------:|---------:|----:|
|default|1804874| 97320|97320|
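The seven float columns described above are continuous scores in [0, 1] (fractions of annotators who flagged each attribute), so many downstream uses binarize them; a 0.5 threshold is the common convention from the Kaggle challenge, but the choice is yours. A minimal sketch using the example record shown above:

```python
# Minimal sketch: binarize the continuous annotator scores.
# The 0.5 threshold is a conventional choice, not part of the dataset.

LABEL_COLUMNS = [
    "toxicity", "severe_toxicity", "obscene",
    "threat", "insult", "identity_attack", "sexual_explicit",
]

def binarize(example, threshold=0.5):
    """Map each float score to 0/1 at the given threshold."""
    return {col: int(example[col] >= threshold) for col in LABEL_COLUMNS}

# The 'validation' instance from the Data Instances section above
example = {
    "text": "The public test.",
    "toxicity": 0.0, "severe_toxicity": 0.0, "obscene": 0.0,
    "threat": 0.0, "insult": 0.0, "identity_attack": 0.0,
    "sexual_explicit": 0.0,
}
labels = binarize(example)
```

With `datasets`, the same function can be applied across a split via `dataset.map(binarize)`.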
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
This dataset is released under [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/).
### Citation Information
```
@article{DBLP:journals/corr/abs-1903-04561,
author = {Daniel Borkan and
Lucas Dixon and
Jeffrey Sorensen and
Nithum Thain and
Lucy Vasserman},
title = {Nuanced Metrics for Measuring Unintended Bias with Real Data for Text
Classification},
journal = {CoRR},
volume = {abs/1903.04561},
year = {2019},
url = {http://arxiv.org/abs/1903.04561},
archivePrefix = {arXiv},
eprint = {1903.04561},
timestamp = {Sun, 31 Mar 2019 19:01:24 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/abs-1903-04561},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
google/MusicCaps | 2023-03-08T14:37:09.000Z | [
"task_categories:text-to-speech",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2301.11325",
"region:us"
] | google | null | null | null | 76 | 1,707 | ---
license:
- cc-by-sa-4.0
converted_from: kaggle
kaggle_id: googleai/musiccaps
task_categories:
- text-to-speech
language:
- en
---
# Dataset Card for MusicCaps
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/googleai/musiccaps
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The MusicCaps dataset contains **5,521 music examples, each of which is labeled with an English *aspect list* and a *free text caption* written by musicians**. An aspect list is for example *"pop, tinny wide hi hats, mellow piano melody, high pitched female vocal melody, sustained pulsating synth lead"*, while the caption consists of multiple sentences about the music, e.g.,
*"A low sounding male voice is rapping over a fast paced drums playing a reggaeton beat along with a bass. Something like a guitar is playing the melody along. This recording is of poor audio-quality. In the background a laughter can be noticed. This song may be playing in a bar."*
The text is solely focused on describing *how* the music sounds, not the metadata like the artist name.
The labeled examples are 10s music clips from the [**AudioSet**](https://research.google.com/audioset/) dataset (2,858 from the eval and 2,663 from the train split).
Please cite the corresponding paper, when using this dataset: http://arxiv.org/abs/2301.11325 (DOI: `10.48550/arXiv.2301.11325`)
### Dataset Usage
The published dataset takes the form of a `.csv` file that contains the IDs of YouTube videos and their start/end timestamps. In order to use this dataset, one must download the corresponding YouTube videos and chunk them according to the start/end times.
The following repository has an example script and notebook to load the clips. The notebook also includes a Gradio demo that helps explore some samples: https://github.com/nateraw/download-musiccaps-dataset
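The chunking step amounts to trimming each downloaded file to its labeled 10-second window. The sketch below only builds the trim command for one row; the file paths, the use of ffmpeg, and the assumption that audio has already been downloaded (e.g. with yt-dlp) are ours, not part of the dataset release.

```python
# Illustrative sketch: build an ffmpeg command that trims a downloaded
# audio file to the labeled clip window. Paths/tools are assumptions.
import shlex

def trim_command(ytid, start_s, end_s, src_dir="raw", out_dir="clips"):
    src = f"{src_dir}/{ytid}.wav"
    out = f"{out_dir}/{ytid}_{start_s}-{end_s}.wav"
    args = [
        "ffmpeg", "-i", src,
        "-ss", str(start_s),         # seek to the clip start
        "-t", str(end_s - start_s),  # clip duration (10 s in MusicCaps)
        "-c", "copy", out,
    ]
    return " ".join(shlex.quote(a) for a in args)

# hypothetical row values (ytid, start_s, end_s)
cmd = trim_command("-0Gj8-vB1q4", 30, 40)
```

The resulting string can be run with `subprocess.run(cmd, shell=True)` or, better, by passing the argument list directly without `shell=True`.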
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
#### ytid
YT ID pointing to the YouTube video in which the labeled music segment appears. You can listen to the segment by opening https://youtu.be/watch?v={ytid}&start={start_s}
#### start_s
Position in the YouTube video at which the music starts.
#### end_s
Position in the YouTube video at which the music ends. All clips are 10s long.
#### audioset_positive_labels
Labels for this segment from the AudioSet (https://research.google.com/audioset/) dataset.
#### aspect_list
A list of aspects describing the music.
#### caption
A multi-sentence free text caption describing the music.
#### author_id
An integer for grouping samples by who wrote them.
#### is_balanced_subset
If this value is true, the row is a part of the 1k subset which is genre-balanced.
#### is_audioset_eval
If this value is true, the clip is from the AudioSet eval split. Otherwise it is from the AudioSet train split.
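Putting the fields above together, one can reconstruct the listen URL for a row and partition rows by subset membership. A small sketch; the rows are invented stand-ins for entries of the released CSV:

```python
# Hedged sketch: use the boolean fields to select subsets, and the URL
# template from the ytid field description. Rows here are made up.

def listen_url(ytid, start_s):
    # template quoted from the ytid field description above
    return f"https://youtu.be/watch?v={ytid}&start={start_s}"

rows = [
    {"ytid": "a", "start_s": 30, "is_balanced_subset": True,  "is_audioset_eval": True},
    {"ytid": "b", "start_s": 0,  "is_balanced_subset": False, "is_audioset_eval": False},
]

balanced = [r for r in rows if r["is_balanced_subset"]]   # genre-balanced 1k subset
eval_rows = [r for r in rows if r["is_audioset_eval"]]    # AudioSet eval split
url = listen_url(rows[0]["ytid"], rows[0]["start_s"])
```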
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@googleai](https://ai.google/research/)
### Licensing Information
The license for this dataset is cc-by-sa-4.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
mteb/amazon_massive_scenario | 2022-05-19T08:00:44.000Z | [
"region:us"
] | mteb | MASSIVE is a parallel dataset of > 1M utterances across 51 languages with annotations
for the Natural Language Understanding tasks of intent prediction and slot annotation.
Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing
the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions. | null | null | 0 | 1,706 | Entry not found |
lamini/alpaca | 2023-07-23T06:29:21.000Z | [
"region:us"
] | lamini | null | null | null | 1 | 1,703 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 27364517
num_examples: 52002
download_size: 12742513
dataset_size: 27364517
---
# Dataset Card for "alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falah/Alzheimer_MRI | 2023-07-04T10:03:44.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"medical",
"region:us"
] | Falah | null | null | null | 1 | 1,666 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Mild_Demented
'1': Moderate_Demented
'2': Non_Demented
'3': Very_Mild_Demented
splits:
- name: train
num_bytes: 22560791.2
num_examples: 5120
- name: test
num_bytes: 5637447.08
num_examples: 1280
download_size: 28289848
dataset_size: 28198238.28
license: apache-2.0
task_categories:
- image-classification
language:
- en
tags:
- medical
pretty_name: Alzheimer_MRI Disease Classification Dataset
size_categories:
- 1K<n<10K
---
# Alzheimer_MRI Disease Classification Dataset
The Falah/Alzheimer_MRI Disease Classification dataset is a valuable resource for researchers and for healthcare applications. This dataset focuses on the classification of Alzheimer's disease based on MRI scans. The dataset consists of brain MRI images labeled into four categories:
- '0': Mild_Demented
- '1': Moderate_Demented
- '2': Non_Demented
- '3': Very_Mild_Demented
## Dataset Information
- Train split:
- Name: train
- Number of bytes: 22,560,791.2
- Number of examples: 5,120
- Test split:
- Name: test
- Number of bytes: 5,637,447.08
- Number of examples: 1,280
- Download size: 28,289,848 bytes
- Dataset size: 28,198,238.28 bytes
## Citation
If you use this dataset in your research or healthcare applications, we kindly request that you cite the following publication:
```
@dataset{alzheimer_mri_dataset,
author = {Falah.G.Salieh},
title = {Alzheimer MRI Dataset},
year = {2023},
publisher = {Hugging Face},
version = {1.0},
url = {https://huggingface.co/datasets/Falah/Alzheimer_MRI}
}
```
## Usage Example
Here's an example of how to load the dataset using the Hugging Face library:
```python
from datasets import load_dataset
# Load the Falah/Alzheimer_MRI dataset
dataset = load_dataset('Falah/Alzheimer_MRI', split='train')
# Print the number of examples and the first few samples
print("Number of examples:", len(dataset))
print("Sample data:")
for example in dataset.select(range(5)):  # dataset[:5] would return a dict of columns, not rows
print(example)
``` |
jordyvl/rvl_cdip_100_examples_per_class | 2023-03-23T20:55:18.000Z | [
"region:us"
] | jordyvl | null | null | null | 0 | 1,664 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': letter
'1': form
'2': email
'3': handwritten
'4': advertisement
'5': scientific report
'6': scientific publication
'7': specification
'8': file folder
'9': news article
'10': budget
'11': invoice
'12': presentation
'13': questionnaire
'14': resume
'15': memo
splits:
- name: train
num_bytes: 97000316.76
num_examples: 800
- name: test
num_bytes: 48612840.21
num_examples: 400
- name: validation
num_bytes: 48666549.76
num_examples: 400
download_size: 180034173
dataset_size: 194279706.73
---
# Dataset Card for "rvl_cdip_100_examples_per_class"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceH4/testing_self_instruct_small | 2023-04-12T21:53:16.000Z | [
"region:us"
] | HuggingFaceH4 | null | null | null | 0 | 1,664 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 20379
num_examples: 100
- name: test
num_bytes: 26586
num_examples: 100
download_size: 35875
dataset_size: 46965
---
# Dataset Card for "testing_self_instruct_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
meczifho/QuaeroFrenchMed | 2023-09-13T20:01:06.000Z | [
"task_categories:token-classification",
"language:fr",
"medical",
"region:us"
] | meczifho | The QuaeroFrenchMed corpus is a manually annotated corpus developed as a resource for named entity recognition and normalization. | @article{neveol2014quaero,
title={The QUAERO French medical corpus: A ressource for medical entity recognition and normalization},
author={N{\'e}v{\'e}ol, Aur{\'e}lie and Grouin, Cyril and Leixa, Jeremy and Rosset, Sophie and Zweigenbaum, Pierre},
journal={Proc of BioTextMining Work},
pages={24--30},
year={2014}
} | null | 1 | 1,664 | ---
language:
- fr
task_categories:
- token-classification
tags:
- medical
---
⚠️ **WARNING : THIS VERSION OF THE DATASET IS MODIFIED IN FORMAT AND CONTENT FROM THE ORIGINAL DATASET AVAILABLE [HERE](https://quaerofrenchmed.limsi.fr/). NESTED ENTITIES HAVE BEEN REMOVED AND THIS DATASET ONLY RETAINS THE LARGEST OF NESTED ENTITIES. OVERALL, THIS CORRESPONDS TO 80% OF THE ENTITIES ANNOTATED IN THE ORIGINAL DATASET.** ⚠️
The QUAERO French Medical Corpus has been initially developed as a resource for named entity recognition and normalization [1]. It was then improved with the purpose of creating a gold standard set of normalized entities for French biomedical text, that was used in the CLEF eHealth evaluation lab [2][3].
A selection of MEDLINE titles and EMEA documents were manually annotated. The annotation process was guided by concepts in the Unified Medical Language System (UMLS):
1. Ten types of clinical entities, as defined by the following UMLS Semantic Groups (Bodenreider and McCray 2003) were annotated: Anatomy (ANAT), Chemical and Drugs (CHEM), Devices (DEVI), Disorders (DISO), Geographic Areas (GEOG), Living Beings (LIVB), Objects (OBJC), Phenomena (PHEN), Physiology (PHYS), Procedures (PROC).
2. The annotations were made in a comprehensive fashion, so that nested entities were marked, and entities could be mapped to more than one UMLS concept. In particular:
   - (a) If a mention can refer to more than one Semantic Group, all the relevant Semantic Groups should be annotated. For instance, the mention “récidive” (recurrence) in the phrase “prévention des récidives” (recurrence prevention) should be annotated with the category “DISORDER” (CUI C2825055) and the category “PHENOMENON” (CUI C0034897).
   - (b) If a mention can refer to more than one UMLS concept within the same Semantic Group, all the relevant concepts should be annotated. For instance, the mention “maniaques” (obsessive) in the phrase “patients maniaques” (obsessive patients) should be annotated with CUIs C0564408 and C0338831 (category “DISORDER”).
   - (c) Entities whose span overlaps with that of another entity should still be annotated. For instance, in the phrase “infarctus du myocarde” (myocardial infarction), the mention “myocarde” (myocardium) should be annotated with category “ANATOMY” (CUI C0027061) and the mention “infarctus du myocarde” should be annotated with category “DISORDER” (CUI C0027051).
For more details, please refer to [the official webpage](https://quaerofrenchmed.limsi.fr/).
⚠️ **WARNING : THIS VERSION OF THE DATASET IS MODIFIED IN FORMAT AND CONTENT FROM THE ORIGINAL DATASET AVAILABLE [HERE](https://quaerofrenchmed.limsi.fr/). NESTED ENTITIES HAVE BEEN REMOVED AND THIS DATASET ONLY RETAINS THE LARGEST OF NESTED ENTITIES. OVERALL, THIS CORRESPONDS TO 80% OF THE ENTITIES ANNOTATED IN THE ORIGINAL DATASET.** ⚠️
In this format, each word of the sentence has an associated ner_tag, corresponding to the type of clinical entity, here is the mapping :
```
0: "O"
1: "DISO"
2: "PROC"
3: "ANAT"
4: "LIVB"
5: "CHEM"
6: "PHYS"
7: "PHEN"
8: "GEOG"
9: "DEVI"
10: "OBJC"
```
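Decoding predicted or gold `ner_tag` ids back to entity-group labels is a simple lookup. A small sketch using the mapping above; the sample tokens and tags are invented for illustration:

```python
# Small sketch: decode ner_tag ids to entity-group labels.
# The id -> label mapping is the one listed above; the sentence and its
# tags are hypothetical, not taken from the corpus.
ID2LABEL = {
    0: "O", 1: "DISO", 2: "PROC", 3: "ANAT", 4: "LIVB", 5: "CHEM",
    6: "PHYS", 7: "PHEN", 8: "GEOG", 9: "DEVI", 10: "OBJC",
}

tokens = ["infarctus", "du", "myocarde", "aigu"]
ner_tags = [1, 1, 1, 0]  # hypothetical annotation

labels = [ID2LABEL[t] for t in ner_tags]
```

Note that since nested entities were flattened in this version, each token carries exactly one label.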
[1] Névéol A, Grouin C, Leixa J, Rosset S, Zweigenbaum P. The QUAERO French Medical Corpus: A Ressource for Medical Entity Recognition and Normalization. Fourth Workshop on Building and Evaluating Ressources for Health and Biomedical Text Processing - BioTxtM2014. 2014:24-30
[2] Névéol A, Grouin C, Tannier X, Hamon T, Kelly L, Goeuriot L, Zweigenbaum P. (2015) Task 1b of the CLEF eHealth Evaluation Lab 2015: Clinical Named Entity Recognition. CLEF 2015 Evaluation Labs and Workshop: Online Working Notes, CEUR-WS, September, 2015.
[3] Névéol A, Cohen, KB, Grouin C, Hamon T, Lavergne T, Kelly L, Goeuriot L, Rey G, Robert A, Tannier X, Zweigenbaum P. Clinical Information Extraction at the CLEF eHealth Evaluation lab 2016. CLEF 2016, Online Working Notes, CEUR-WS 1609.2016:28-42. |
clinc_oos | 2023-01-25T14:28:10.000Z | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"region:us"
] | null | This dataset is for evaluating the performance of intent classification systems in the
presence of "out-of-scope" queries. By "out-of-scope", we mean queries that do not fall
into any of the system-supported intent classes. Most datasets include only data that is
"in-scope". Our dataset includes both in-scope and out-of-scope data. You might also know
the term "out-of-scope" by other terms, including "out-of-domain" or "out-of-distribution". | @inproceedings{larson-etal-2019-evaluation,
title = "An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction",
author = "Larson, Stefan and
Mahendran, Anish and
Peper, Joseph J. and
Clarke, Christopher and
Lee, Andrew and
Hill, Parker and
Kummerfeld, Jonathan K. and
Leach, Kevin and
Laurenzano, Michael A. and
Tang, Lingjia and
Mars, Jason",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
year = "2019",
url = "https://www.aclweb.org/anthology/D19-1131"
} | null | 11 | 1,650 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
paperswithcode_id: clinc150
pretty_name: CLINC150
dataset_info:
- config_name: small
features:
- name: text
dtype: string
- name: intent
dtype:
class_label:
names:
'0': restaurant_reviews
'1': nutrition_info
'2': account_blocked
'3': oil_change_how
'4': time
'5': weather
'6': redeem_rewards
'7': interest_rate
'8': gas_type
'9': accept_reservations
'10': smart_home
'11': user_name
'12': report_lost_card
'13': repeat
'14': whisper_mode
'15': what_are_your_hobbies
'16': order
'17': jump_start
'18': schedule_meeting
'19': meeting_schedule
'20': freeze_account
'21': what_song
'22': meaning_of_life
'23': restaurant_reservation
'24': traffic
'25': make_call
'26': text
'27': bill_balance
'28': improve_credit_score
'29': change_language
'30': 'no'
'31': measurement_conversion
'32': timer
'33': flip_coin
'34': do_you_have_pets
'35': balance
'36': tell_joke
'37': last_maintenance
'38': exchange_rate
'39': uber
'40': car_rental
'41': credit_limit
'42': oos
'43': shopping_list
'44': expiration_date
'45': routing
'46': meal_suggestion
'47': tire_change
'48': todo_list
'49': card_declined
'50': rewards_balance
'51': change_accent
'52': vaccines
'53': reminder_update
'54': food_last
'55': change_ai_name
'56': bill_due
'57': who_do_you_work_for
'58': share_location
'59': international_visa
'60': calendar
'61': translate
'62': carry_on
'63': book_flight
'64': insurance_change
'65': todo_list_update
'66': timezone
'67': cancel_reservation
'68': transactions
'69': credit_score
'70': report_fraud
'71': spending_history
'72': directions
'73': spelling
'74': insurance
'75': what_is_your_name
'76': reminder
'77': where_are_you_from
'78': distance
'79': payday
'80': flight_status
'81': find_phone
'82': greeting
'83': alarm
'84': order_status
'85': confirm_reservation
'86': cook_time
'87': damaged_card
'88': reset_settings
'89': pin_change
'90': replacement_card_duration
'91': new_card
'92': roll_dice
'93': income
'94': taxes
'95': date
'96': who_made_you
'97': pto_request
'98': tire_pressure
'99': how_old_are_you
'100': rollover_401k
'101': pto_request_status
'102': how_busy
'103': application_status
'104': recipe
'105': calendar_update
'106': play_music
'107': 'yes'
'108': direct_deposit
'109': credit_limit_change
'110': gas
'111': pay_bill
'112': ingredients_list
'113': lost_luggage
'114': goodbye
'115': what_can_i_ask_you
'116': book_hotel
'117': are_you_a_bot
'118': next_song
'119': change_speed
'120': plug_type
'121': maybe
'122': w2
'123': oil_change_when
'124': thank_you
'125': shopping_list_update
'126': pto_balance
'127': order_checks
'128': travel_alert
'129': fun_fact
'130': sync_device
'131': schedule_maintenance
'132': apr
'133': transfer
'134': ingredient_substitution
'135': calories
'136': current_location
'137': international_fees
'138': calculator
'139': definition
'140': next_holiday
'141': update_playlist
'142': mpg
'143': min_payment
'144': change_user_name
'145': restaurant_suggestion
'146': travel_notification
'147': cancel
'148': pto_used
'149': travel_suggestion
'150': change_volume
splits:
- name: train
num_bytes: 394128
num_examples: 7600
- name: validation
num_bytes: 160302
num_examples: 3100
- name: test
num_bytes: 286970
num_examples: 5500
download_size: 1702451
dataset_size: 841400
- config_name: imbalanced
features:
- name: text
dtype: string
- name: intent
dtype:
class_label:
names:
'0': restaurant_reviews
'1': nutrition_info
'2': account_blocked
'3': oil_change_how
'4': time
'5': weather
'6': redeem_rewards
'7': interest_rate
'8': gas_type
'9': accept_reservations
'10': smart_home
'11': user_name
'12': report_lost_card
'13': repeat
'14': whisper_mode
'15': what_are_your_hobbies
'16': order
'17': jump_start
'18': schedule_meeting
'19': meeting_schedule
'20': freeze_account
'21': what_song
'22': meaning_of_life
'23': restaurant_reservation
'24': traffic
'25': make_call
'26': text
'27': bill_balance
'28': improve_credit_score
'29': change_language
'30': 'no'
'31': measurement_conversion
'32': timer
'33': flip_coin
'34': do_you_have_pets
'35': balance
'36': tell_joke
'37': last_maintenance
'38': exchange_rate
'39': uber
'40': car_rental
'41': credit_limit
'42': oos
'43': shopping_list
'44': expiration_date
'45': routing
'46': meal_suggestion
'47': tire_change
'48': todo_list
'49': card_declined
'50': rewards_balance
'51': change_accent
'52': vaccines
'53': reminder_update
'54': food_last
'55': change_ai_name
'56': bill_due
'57': who_do_you_work_for
'58': share_location
'59': international_visa
'60': calendar
'61': translate
'62': carry_on
'63': book_flight
'64': insurance_change
'65': todo_list_update
'66': timezone
'67': cancel_reservation
'68': transactions
'69': credit_score
'70': report_fraud
'71': spending_history
'72': directions
'73': spelling
'74': insurance
'75': what_is_your_name
'76': reminder
'77': where_are_you_from
'78': distance
'79': payday
'80': flight_status
'81': find_phone
'82': greeting
'83': alarm
'84': order_status
'85': confirm_reservation
'86': cook_time
'87': damaged_card
'88': reset_settings
'89': pin_change
'90': replacement_card_duration
'91': new_card
'92': roll_dice
'93': income
'94': taxes
'95': date
'96': who_made_you
'97': pto_request
'98': tire_pressure
'99': how_old_are_you
'100': rollover_401k
'101': pto_request_status
'102': how_busy
'103': application_status
'104': recipe
'105': calendar_update
'106': play_music
'107': 'yes'
'108': direct_deposit
'109': credit_limit_change
'110': gas
'111': pay_bill
'112': ingredients_list
'113': lost_luggage
'114': goodbye
'115': what_can_i_ask_you
'116': book_hotel
'117': are_you_a_bot
'118': next_song
'119': change_speed
'120': plug_type
'121': maybe
'122': w2
'123': oil_change_when
'124': thank_you
'125': shopping_list_update
'126': pto_balance
'127': order_checks
'128': travel_alert
'129': fun_fact
'130': sync_device
'131': schedule_maintenance
'132': apr
'133': transfer
'134': ingredient_substitution
'135': calories
'136': current_location
'137': international_fees
'138': calculator
'139': definition
'140': next_holiday
'141': update_playlist
'142': mpg
'143': min_payment
'144': change_user_name
'145': restaurant_suggestion
'146': travel_notification
'147': cancel
'148': pto_used
'149': travel_suggestion
'150': change_volume
splits:
- name: train
num_bytes: 546909
num_examples: 10625
- name: validation
num_bytes: 160302
num_examples: 3100
- name: test
num_bytes: 286970
num_examples: 5500
download_size: 2016773
dataset_size: 994181
- config_name: plus
features:
- name: text
dtype: string
- name: intent
dtype:
class_label:
names:
'0': restaurant_reviews
'1': nutrition_info
'2': account_blocked
'3': oil_change_how
'4': time
'5': weather
'6': redeem_rewards
'7': interest_rate
'8': gas_type
'9': accept_reservations
'10': smart_home
'11': user_name
'12': report_lost_card
'13': repeat
'14': whisper_mode
'15': what_are_your_hobbies
'16': order
'17': jump_start
'18': schedule_meeting
'19': meeting_schedule
'20': freeze_account
'21': what_song
'22': meaning_of_life
'23': restaurant_reservation
'24': traffic
'25': make_call
'26': text
'27': bill_balance
'28': improve_credit_score
'29': change_language
'30': 'no'
'31': measurement_conversion
'32': timer
'33': flip_coin
'34': do_you_have_pets
'35': balance
'36': tell_joke
'37': last_maintenance
'38': exchange_rate
'39': uber
'40': car_rental
'41': credit_limit
'42': oos
'43': shopping_list
'44': expiration_date
'45': routing
'46': meal_suggestion
'47': tire_change
'48': todo_list
'49': card_declined
'50': rewards_balance
'51': change_accent
'52': vaccines
'53': reminder_update
'54': food_last
'55': change_ai_name
'56': bill_due
'57': who_do_you_work_for
'58': share_location
'59': international_visa
'60': calendar
'61': translate
'62': carry_on
'63': book_flight
'64': insurance_change
'65': todo_list_update
'66': timezone
'67': cancel_reservation
'68': transactions
'69': credit_score
'70': report_fraud
'71': spending_history
'72': directions
'73': spelling
'74': insurance
'75': what_is_your_name
'76': reminder
'77': where_are_you_from
'78': distance
'79': payday
'80': flight_status
'81': find_phone
'82': greeting
'83': alarm
'84': order_status
'85': confirm_reservation
'86': cook_time
'87': damaged_card
'88': reset_settings
'89': pin_change
'90': replacement_card_duration
'91': new_card
'92': roll_dice
'93': income
'94': taxes
'95': date
'96': who_made_you
'97': pto_request
'98': tire_pressure
'99': how_old_are_you
'100': rollover_401k
'101': pto_request_status
'102': how_busy
'103': application_status
'104': recipe
'105': calendar_update
'106': play_music
'107': 'yes'
'108': direct_deposit
'109': credit_limit_change
'110': gas
'111': pay_bill
'112': ingredients_list
'113': lost_luggage
'114': goodbye
'115': what_can_i_ask_you
'116': book_hotel
'117': are_you_a_bot
'118': next_song
'119': change_speed
'120': plug_type
'121': maybe
'122': w2
'123': oil_change_when
'124': thank_you
'125': shopping_list_update
'126': pto_balance
'127': order_checks
'128': travel_alert
'129': fun_fact
'130': sync_device
'131': schedule_maintenance
'132': apr
'133': transfer
'134': ingredient_substitution
'135': calories
'136': current_location
'137': international_fees
'138': calculator
'139': definition
'140': next_holiday
'141': update_playlist
'142': mpg
'143': min_payment
'144': change_user_name
'145': restaurant_suggestion
'146': travel_notification
'147': cancel
'148': pto_used
'149': travel_suggestion
'150': change_volume
splits:
- name: train
num_bytes: 791255
num_examples: 15250
- name: validation
num_bytes: 160302
num_examples: 3100
- name: test
num_bytes: 286970
num_examples: 5500
download_size: 2509789
dataset_size: 1238527
---
# Dataset Card for CLINC150
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/clinc/oos-eval/)
- **Repository:** [Github](https://github.com/clinc/oos-eval/)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/D19-1131)
- **Leaderboard:** [PapersWithCode](https://paperswithcode.com/sota/text-classification-on-clinc-oos)
- **Point of Contact:**
### Dataset Summary
Task-oriented dialog systems need to know when a query falls outside their range of supported intents, but current text classification corpora only define label sets that cover every example. We introduce a new dataset that includes queries that are out-of-scope (OOS), i.e., queries that do not fall into any of the system's supported intents. This poses a new challenge because models cannot assume that every query at inference time belongs to a system-supported intent class. Our dataset also covers 150 intent classes over 10 domains, capturing the breadth that a production task-oriented agent must handle. It offers a way of more rigorously and realistically benchmarking text classification in task-driven dialog systems.
### Supported Tasks and Leaderboards
- `intent-classification`: This dataset is for evaluating the performance of intent classification systems in the presence of "out-of-scope" queries, i.e., queries that do not fall into any of the system-supported intent classes. The dataset includes both in-scope and out-of-scope data. A leaderboard is available [here](https://paperswithcode.com/sota/text-classification-on-clinc-oos).
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'text' : 'can you walk me through setting up direct deposits to my bank of internet savings account',
'label' : 108
}
```
### Data Fields
- `text`: the raw query text.
- `label`: the intent class id. The dataset defines 150 in-scope intent classes over 10 domains plus one `oos` (out-of-scope) label, for 151 labels in total (ids 0-150, with `oos` at id 42).
The label id to label name mapping is given in the table below:
| **Label Id** | **Label name** |
|--- |--- |
| 0 | restaurant_reviews |
| 1 | nutrition_info |
| 2 | account_blocked |
| 3 | oil_change_how |
| 4 | time |
| 5 | weather |
| 6 | redeem_rewards |
| 7 | interest_rate |
| 8 | gas_type |
| 9 | accept_reservations |
| 10 | smart_home |
| 11 | user_name |
| 12 | report_lost_card |
| 13 | repeat |
| 14 | whisper_mode |
| 15 | what_are_your_hobbies |
| 16 | order |
| 17 | jump_start |
| 18 | schedule_meeting |
| 19 | meeting_schedule |
| 20 | freeze_account |
| 21 | what_song |
| 22 | meaning_of_life |
| 23 | restaurant_reservation |
| 24 | traffic |
| 25 | make_call |
| 26 | text |
| 27 | bill_balance |
| 28 | improve_credit_score |
| 29 | change_language |
| 30 | no |
| 31 | measurement_conversion |
| 32 | timer |
| 33 | flip_coin |
| 34 | do_you_have_pets |
| 35 | balance |
| 36 | tell_joke |
| 37 | last_maintenance |
| 38 | exchange_rate |
| 39 | uber |
| 40 | car_rental |
| 41 | credit_limit |
| 42 | oos |
| 43 | shopping_list |
| 44 | expiration_date |
| 45 | routing |
| 46 | meal_suggestion |
| 47 | tire_change |
| 48 | todo_list |
| 49 | card_declined |
| 50 | rewards_balance |
| 51 | change_accent |
| 52 | vaccines |
| 53 | reminder_update |
| 54 | food_last |
| 55 | change_ai_name |
| 56 | bill_due |
| 57 | who_do_you_work_for |
| 58 | share_location |
| 59 | international_visa |
| 60 | calendar |
| 61 | translate |
| 62 | carry_on |
| 63 | book_flight |
| 64 | insurance_change |
| 65 | todo_list_update |
| 66 | timezone |
| 67 | cancel_reservation |
| 68 | transactions |
| 69 | credit_score |
| 70 | report_fraud |
| 71 | spending_history |
| 72 | directions |
| 73 | spelling |
| 74 | insurance |
| 75 | what_is_your_name |
| 76 | reminder |
| 77 | where_are_you_from |
| 78 | distance |
| 79 | payday |
| 80 | flight_status |
| 81 | find_phone |
| 82 | greeting |
| 83 | alarm |
| 84 | order_status |
| 85 | confirm_reservation |
| 86 | cook_time |
| 87 | damaged_card |
| 88 | reset_settings |
| 89 | pin_change |
| 90 | replacement_card_duration |
| 91 | new_card |
| 92 | roll_dice |
| 93 | income |
| 94 | taxes |
| 95 | date |
| 96 | who_made_you |
| 97 | pto_request |
| 98 | tire_pressure |
| 99 | how_old_are_you |
| 100 | rollover_401k |
| 101 | pto_request_status |
| 102 | how_busy |
| 103 | application_status |
| 104 | recipe |
| 105 | calendar_update |
| 106 | play_music |
| 107 | yes |
| 108 | direct_deposit |
| 109 | credit_limit_change |
| 110 | gas |
| 111 | pay_bill |
| 112 | ingredients_list |
| 113 | lost_luggage |
| 114 | goodbye |
| 115 | what_can_i_ask_you |
| 116 | book_hotel |
| 117 | are_you_a_bot |
| 118 | next_song |
| 119 | change_speed |
| 120 | plug_type |
| 121 | maybe |
| 122 | w2 |
| 123 | oil_change_when |
| 124 | thank_you |
| 125 | shopping_list_update |
| 126 | pto_balance |
| 127 | order_checks |
| 128 | travel_alert |
| 129 | fun_fact |
| 130 | sync_device |
| 131 | schedule_maintenance |
| 132 | apr |
| 133 | transfer |
| 134 | ingredient_substitution |
| 135 | calories |
| 136 | current_location |
| 137 | international_fees |
| 138 | calculator |
| 139 | definition |
| 140 | next_holiday |
| 141 | update_playlist |
| 142 | mpg |
| 143 | min_payment |
| 144 | change_user_name |
| 145 | restaurant_suggestion |
| 146 | travel_notification |
| 147 | cancel |
| 148 | pto_used |
| 149 | travel_suggestion |
| 150 | change_volume |
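Since examples carry only the numeric `label`, it can be mapped back to its name with an ordinary dictionary built from the table above. A minimal sketch (only a few of the 151 entries are shown here for brevity):

```python
# Map a few of the label ids from the table above back to their names.
# Excerpt only; the full mapping has 151 entries (ids 0-150).
id2label = {
    0: "restaurant_reviews",
    42: "oos",
    108: "direct_deposit",
    150: "change_volume",
}
# Invert it for name -> id lookups.
label2id = {name: idx for idx, name in id2label.items()}

print(id2label[108])    # direct_deposit (the label of the sample instance above)
print(label2id["oos"])  # 42
```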
### Data Splits
The dataset comes in different subsets:
- `small` : Small, in which there are only 50 training queries per in-scope intent.
- `imbalanced` : Imbalanced, in which intents have either 25, 50, 75, or 100 training queries.
- `plus`: OOS+, in which there are 250 out-of-scope training examples, rather than 100.
| name |train|validation|test|
|----------|----:|---------:|---:|
|small|7600| 3100| 5500 |
|imbalanced|10625| 3100| 5500|
|plus|15250| 3100| 5500|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{larson-etal-2019-evaluation,
title = "An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction",
author = "Larson, Stefan and
Mahendran, Anish and
Peper, Joseph J. and
Clarke, Christopher and
Lee, Andrew and
Hill, Parker and
Kummerfeld, Jonathan K. and
Leach, Kevin and
Laurenzano, Michael A. and
Tang, Lingjia and
Mars, Jason",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
year = "2019",
url = "https://www.aclweb.org/anthology/D19-1131"
}
```
### Contributions
Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset. |
TREC-AToMiC/AToMiC-Qrels-v0.2 | 2023-02-14T21:31:18.000Z | [
"license:cc-by-sa-4.0",
"region:us"
] | TREC-AToMiC | null | null | null | 1 | 1,646 | ---
dataset_info:
features:
- name: text_id
dtype: string
- name: Q0
dtype: string
- name: image_id
dtype: string
- name: rel
dtype: int64
splits:
- name: test
num_bytes: 789840
num_examples: 9873
- name: validation
num_bytes: 1424080
num_examples: 17801
- name: train
num_bytes: 352152240
num_examples: 4401903
download_size: 205636566
dataset_size: 354366160
license: cc-by-sa-4.0
---
# Dataset Card for "AToMiC-Qrels-v0.2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dream | 2022-11-18T19:59:12.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | DREAM is a multiple-choice Dialogue-based REAding comprehension exaMination dataset. In contrast to existing reading comprehension datasets, DREAM is the first to focus on in-depth multi-turn multi-party dialogue understanding. | @article{sundream2018,
title={{DREAM}: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension},
author={Sun, Kai and Yu, Dian and Chen, Jianshu and Yu, Dong and Choi, Yejin and Cardie, Claire},
journal={Transactions of the Association for Computational Linguistics},
year={2019},
url={https://arxiv.org/abs/1902.00164v1}
} | null | 6 | 1,645 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: dream
pretty_name: DREAM
dataset_info:
features:
- name: id
dtype: int32
- name: dialogue_id
dtype: string
- name: dialogue
sequence: string
- name: question
dtype: string
- name: choice
sequence: string
- name: answer
dtype: string
config_name: plain_text
splits:
- name: train
num_bytes: 4775235
num_examples: 6116
- name: validation
num_bytes: 1539272
num_examples: 2040
- name: test
num_bytes: 1556379
num_examples: 2041
download_size: 5558190
dataset_size: 7870886
---
# Dataset Card for DREAM
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Add homepage URL here if available (unless it's a GitHub repository)]()
- **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]()
- **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()
- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]()
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
kernelmachine/open-license-corpus | 2023-08-09T03:14:36.000Z | [
"task_categories:text-generation",
"size_categories:100B<n<1T",
"language:en",
"license:apache-2.0",
"region:us"
] | kernelmachine | null | null | null | 6 | 1,632 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
pretty_name: pubtext
size_categories:
- 100B<n<1T
---
# Open License Corpus (OLC)
Welcome to the Open License Corpus (OLC), a 228B token corpus for training permissively-licensed language models.
**Disclaimer**: OLC should not be considered a universally safe-to-use dataset. We encourage users of OLC to consult a legal professional on the suitability of each data source for their application.
## Dataset Description
- **Repository:** [Silo LM repository](https://github.com/kernelmachine/silo-lm)
- **Paper:** [Silo LM paper](https://github.com/kernelmachine/silo-lm)
- **Point of Contact:** [Suchin Gururangan](mailto:sg01@cs.washington.edu)
### Dataset Summary
| Domain | Sources | Specific License | # BPE Tokens (in billions; GPT-NeoX tokenizer) |
|--------------|------------------------------------------------------|------------------|------------------|
| Legal | Case Law, Pile of Law (PD subset) | Public Domain | 27.1 |
| Legal | Pile of Law (CC BY-SA subset) | CC BY-SA | 0.07 |
| Code | Github (permissive) | MIT/BSD/Apache | 58.9 |
| Conversational| HackerNews, Ubuntu IRC | MIT/Apache | 5.9 |
| Conversational | Stack Overflow, Stack Exchange | CC BY-SA | 21.3 |
| Math | Deepmind Math, AMPS | Apache | 3.5 |
| Science | ArXiv abstracts, S2ORC (PD subset) | Public Domain | 1.2 |
| Science | S2ORC (CC BY-SA subset) | CC BY-SA | 70.3 |
| Books | Gutenberg | Public Domain | 2.9 |
| News | Public domain news | Public Domain | 0.2 |
| News | Wikinews | CC BY-SA | 0.01 |
| Encyclopedic | Wikipedia | CC BY-SA | 37.0 |
### Supported Tasks and Leaderboards
- `text-generation`: The dataset can be used to train a language model for text generation. The language model performance is evaluated based on perplexity.
### Languages
OLC is primarily an English-language dataset, but it also contains some data in other languages (primarily in the Wikipedia subset, which draws on the [Red Pajama](https://github.com/togethercomputer/RedPajama-Data) data collection).
## Dataset Structure
The dataset is a standard text-only structure, separated into each subset that we include in the paper.
```python
from datasets import load_dataset
dataset = load_dataset('kernelmachine/open-license-corpus', 'pd_law', streaming=True)['train']
```
To use a collection of sources, you should specify each individually and interleave, like so:
```python
from datasets import interleave_datasets, load_dataset
d1 = load_dataset('kernelmachine/open-license-corpus', 'pd_law', streaming=True)['train']
d2 = load_dataset('kernelmachine/open-license-corpus', 'sw_github', streaming=True)['train']
d1_d2 = interleave_datasets([d1,d2], probabilities=[0.8, 0.2], seed=42)
```
### Data Instances and Fields
Each example is a standard text-only structure, e.g. `{"text": "this is a document"}`. We do not add any other fields to documents.
### Data Splits
We only include the training data in this repository.
For validation data, in the paper we use the Pile validation data, which we decontaminate OLC against using a deduplication script (see more below).
The Pile validation data that we use in the paper can be found [here]().
## Dataset Creation
### License Taxonomy
* **Public Domain (PD):** Public domain text has no restrictions.
* **Permissively licensed software (SW):** including MIT, Apache, and BSD software.
* **Attribution licenses (BY):** such as Creative Commons Attribution (CC-BY) are free to use as long as "credit is given to the creator."
* **All other data:** that is not in one of the above three categories is assumed to be non-permissive. This includes: any text that is explicitly protected by copyright or licenses that are non-commercial (e.g., CC-NC), any software without clear MIT, BSD, or Apache licenses, and any generic web-crawled data where the license or copyright information may be unclear.
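The taxonomy above can be expressed as a small lookup that falls back to "non-permissive" for anything unrecognized. This is an illustrative sketch, not the actual curation code; the SPDX-style license identifiers below are assumptions for the example:

```python
# Illustrative license-id -> taxonomy lookup (PD / SW / BY).
# Anything not explicitly recognized is treated as non-permissive,
# per the taxonomy described above. Identifiers are hypothetical SPDX-style names.
TAXONOMY = {
    "public-domain": "PD", "cc0-1.0": "PD",
    "mit": "SW", "apache-2.0": "SW", "bsd-3-clause": "SW",
    "cc-by-4.0": "BY", "cc-by-sa-4.0": "BY",
}

def classify(license_id: str) -> str:
    return TAXONOMY.get(license_id.lower(), "non-permissive")

print(classify("MIT"))           # SW
print(classify("cc-by-nc-4.0"))  # non-permissive (NC licenses are excluded)
```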
### Building OLC
Based on this taxonomy of licenses, we built OLC, a 228B-token corpus of PD, SW, and BY data. OLC consists of 17 manually selected
sources of primarily English text that are under permissive licenses.
The text generally falls into eight different domains:
* **Legal:** We curate legal text from the Pile of Law, an amalgamation of 31 different sources of text related to civil court cases, patents, and other legal and governmental works, either licensed as public domain or CC-BY. We also gather public domain text from the Case Law Access Project, which covers over 6.5 million decisions published by state and federal courts throughout U.S. history.
* **Code:** We use the Github subset of the RedPajama dataset, which contains code from Github repositories with three permissive software licenses: MIT, Apache, and BSD.
* **Conversation:** We source conversational text under permissive software licenses from the HackerNews (MIT license) and the Ubuntu IRC (Apache license) subsets of the Pile. We also use the Stackexchange subset of the RedPajama dataset and a Stackoverflow corpus from Kaggle, both under the CC-BY-SA license.
* **Math:** We source mathematical text from the Deepmind Mathematics and the AMPS datasets, both of which are under the Apache license.
* **Science:** We source scientific text from ArXiv abstracts that are in the public domain. We also collect full-text articles from the Semantic Scholar Research Corpus (S2ORC), either licensed as public domain or CC-BY.
* **Books:** We source books from the Gutenberg corpus, which are copyright-expired books that are in the public domain.
* **News:** We collect public domain news text from the English subset of the MOT corpus. We also collect text from Wikinews, which is under CC BY-SA.
* **Encyclopedic:** Finally, we include a large set of Wikipedia articles from the subset included in RedPajama. We follow RedPajama in using Wikipedia snapshots from 20 languages even though the model primarily focuses on English.
#### Initial Data Collection and Normalization
We deduplicate text using a document-level filter that considers $n$-gram overlap. We first deduplicate within each domain to remove redundant documents from similar sources (e.g. Case Law and the Pile of Law), and then perform deduplication against the validation and test datasets of the Pile to avoid test leakage.
We do not perform any additional quality filtering, though some subsets (e.g. Github and Wikipedia) are already quality filtered by the original data curators of those subsets.
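The document-level $n$-gram-overlap filter described above can be sketched as follows. The $n$-gram size and overlap threshold here are illustrative assumptions, not the values used for OLC:

```python
# Minimal sketch of document-level n-gram-overlap deduplication.
# n=3 and threshold=0.8 are illustrative choices, not OLC's actual settings.
def ngrams(text, n=3):
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_duplicate(doc, seen, n=3, threshold=0.8):
    """Return True if doc's n-gram overlap with any seen document meets the threshold."""
    grams = ngrams(doc, n)
    if not grams:
        return False
    return any(len(grams & other) / len(grams) >= threshold for other in seen)

seen = [ngrams("the quick brown fox jumps over the lazy dog")]
print(is_duplicate("the quick brown fox jumps over the lazy dog indeed", seen))  # True
print(is_duplicate("completely different words about other topics entirely", seen))  # False
```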
#### Who are the source language producers?
The source language producers vary by domain; the Legal subset primarily contains governmental documents, while the Github subset contains code repositories written by the public. We refer to each data source for further information.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
We do not perform additional filtering to remove personally identifiable information, so it is possible that certain subsets still pose privacy risks despite being permissively licensed.
## Considerations for Using the Data
Please see the disclaimer above. The license associated with a document may be time- and country-dependent. Moreover, other legal constraints may prohibit the use of a data source despite a permissive data license. We encourage users of OLC to consult a legal professional on the suitability of each data source for their application.
### Social Impact of Dataset
OLC is the first multidomain, permissively licensed corpus, which can enable language models that align better with data-use regulations such as the fair-use doctrine in the United States and the GDPR in the European Union.
### Discussion of Biases and Limitations
While OLC mitigates copyright and privacy risks, it may exacerbate certain fairness issues, like toxicity towards marginalized groups and racial biases, especially due to the prevalence of older copyright-expired books in the training data.
In addition, OLC relies on explicit metadata to identify licenses, which may lead to underestimates of the amount and diversity of permissively licensed text actually available on the web.
### Dataset Curators
OLC was curated by the authors of SILO language models.
### Licensing Information
We release this corpus under the Apache 2.0 license.
### Citation Information
|
baber/agieval | 2023-08-30T00:47:50.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:mit",
"arxiv:2304.06364",
"region:us"
] | baber | null | @ARTICLE{10174688,
author={Liu, Hanmeng and Liu, Jian and Cui, Leyang and Teng, Zhiyang and Duan, Nan and Zhou, Ming and Zhang, Yue},
journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
title={LogiQA 2.0 — An Improved Dataset for Logical Reasoning in Natural Language Understanding},
year={2023},
volume={},
number={},
pages={1-16},
doi={10.1109/TASLP.2023.3293046}} | null | 2 | 1,631 | ---
license: mit
language:
- en
task_categories:
- question-answering
- text-generation
pretty_name: AGIEval
---
# Dataset Card for AGIEval
## Dataset Description
- **Homepage:** https://github.com/microsoft/AGIEval/blob/main/README.md
- **Repository:** https://github.com/microsoft/AGIEval
- **Paper:** https://arxiv.org/abs/2304.06364
### Dataset Summary
AGIEval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. This benchmark is derived from 20 official, public, and high-standard admission and qualification exams intended for general human test-takers, such as general college admission tests (e.g., Chinese College Entrance Exam (Gaokao) and American SAT), law school admission tests, math competitions, lawyer qualification tests, and national civil service exams.
### Citation Information
Dataset taken from the AGIEval Repo.
```
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Citation for each dataset:
```
@inproceedings{ling-etal-2017-program,
title = "Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems",
author = "Ling, Wang and
Yogatama, Dani and
Dyer, Chris and
Blunsom, Phil",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P17-1015",
doi = "10.18653/v1/P17-1015",
pages = "158--167",
abstract = "Solving algebraic word problems requires executing a series of arithmetic operations{---}a program{---}to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.",
}
@inproceedings{hendrycksmath2021,
title={Measuring Mathematical Problem Solving With the MATH Dataset},
author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},
journal={NeurIPS},
year={2021}
}
@inproceedings{Liu2020LogiQAAC,
title={LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning},
author={Jian Liu and Leyang Cui and Hanmeng Liu and Dandan Huang and Yile Wang and Yue Zhang},
booktitle={International Joint Conference on Artificial Intelligence},
year={2020}
}
@inproceedings{zhong2019jec,
title={JEC-QA: A Legal-Domain Question Answering Dataset},
author={Zhong, Haoxi and Xiao, Chaojun and Tu, Cunchao and Zhang, Tianyang and Liu, Zhiyuan and Sun, Maosong},
booktitle={Proceedings of AAAI},
year={2020},
}
@article{Wang2021FromLT,
title={From LSAT: The Progress and Challenges of Complex Reasoning},
author={Siyuan Wang and Zhongkun Liu and Wanjun Zhong and Ming Zhou and Zhongyu Wei and Zhumin Chen and Nan Duan},
journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
year={2021},
volume={30},
pages={2201-2216}
}
``` |
conceptual_captions | 2022-11-03T16:32:04.000Z | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | null | Google's Conceptual Captions dataset has more than 3 million images, paired with natural-language captions.
In contrast with the curated style of the MS-COCO images, Conceptual Captions images and their raw descriptions are harvested from the web,
and therefore represent a wider variety of styles. The raw descriptions are harvested from the Alt-text HTML attribute associated with web images.
The authors developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness,
informativeness, fluency, and learnability of the resulting captions. | @inproceedings{sharma2018conceptual,
title = {Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning},
author = {Sharma, Piyush and Ding, Nan and Goodman, Sebastian and Soricut, Radu},
booktitle = {Proceedings of ACL},
year = {2018},
} | null | 36 | 1,629 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: conceptual-captions
pretty_name: Conceptual Captions
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: caption
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 623230370
num_examples: 3318333
- name: validation
num_bytes: 2846024
num_examples: 15840
download_size: 0
dataset_size: 626076394
- config_name: unlabeled
features:
- name: image_url
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 584520156
num_examples: 3318333
- name: validation
num_bytes: 2698726
num_examples: 15840
download_size: 567211172
dataset_size: 587218882
- config_name: labeled
features:
- name: image_url
dtype: string
- name: caption
dtype: string
- name: labels
sequence: string
- name: MIDs
sequence: string
- name: confidence_scores
sequence: float64
splits:
- name: train
num_bytes: 1199330856
num_examples: 2007090
download_size: 1282463277
dataset_size: 1199330856
---
# Dataset Card for Conceptual Captions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Conceptual Captions homepage](https://ai.google.com/research/ConceptualCaptions/)
- **Repository:** [Conceptual Captions repository](https://github.com/google-research-datasets/conceptual-captions)
- **Paper:** [Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning](https://www.aclweb.org/anthology/P18-1238/)
- **Leaderboard:** [Conceptual Captions leaderboard](https://ai.google.com/research/ConceptualCaptions/competition?active_tab=leaderboard)
- **Point of Contact:** [Conceptual Captions e-mail](mailto:conceptual-captions@google.com)
### Dataset Summary
Conceptual Captions is a dataset consisting of ~3.3M images annotated with captions. In contrast with the curated style of other image caption annotations, Conceptual Captions images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. More precisely, the raw descriptions are harvested from the Alt-text HTML attribute associated with web images. To arrive at the current version of the captions, we have developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions.
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib
import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
def fetch_single_image(image_url, timeout=None, retries=0):
for _ in range(retries + 1):
try:
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": USER_AGENT},
)
with urllib.request.urlopen(request, timeout=timeout) as req:
image = PIL.Image.open(io.BytesIO(req.read()))
break
except Exception:
image = None
return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
with ThreadPoolExecutor(max_workers=num_threads) as executor:
batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
return batch
num_threads = 20
dset = load_dataset("conceptual_captions")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
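After the `map` call above, rows whose download failed carry `image = None`. A minimal sketch for dropping those rows from a fetched batch (the helper name `drop_failed_images` and the toy batch values are illustrative, not part of the dataset API):

```python
def drop_failed_images(batch):
    # Keep only the rows whose image download succeeded (image is not None).
    keep = [img is not None for img in batch["image"]]
    return {key: [v for v, ok in zip(values, keep) if ok] for key, values in batch.items()}

# Toy batch mimicking the post-fetch column layout:
batch = {
    "image_url": ["u0", "u1", "u2"],
    "caption": ["a", "b", "c"],
    "image": [object(), None, object()],  # middle download failed
}
cleaned = drop_failed_images(batch)
print(cleaned["caption"])  # ['a', 'c']
```

With a real `datasets.Dataset`, the same effect can be achieved with `dset.filter(lambda ex: ex["image"] is not None)`.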
### Supported Tasks and Leaderboards
- `image-captioning`: This dataset can be used to train a model for the Image Captioning task. The leaderboard for this task is available [here](https://ai.google.com/research/ConceptualCaptions/competition?active_tab=leaderboard). Official submission output captions are scored against the reference captions from the hidden test set using [this](https://github.com/tylin/coco-caption) implementation of the CIDEr (primary), ROUGE-L and SPICE metrics.
### Languages
All captions are in English.
## Dataset Structure
### Data Instances
#### `unlabeled`
Each instance in this configuration represents a single image with a caption:
```
{
'image_url': 'http://lh6.ggpht.com/-IvRtNLNcG8o/TpFyrudaT6I/AAAAAAAAM6o/_11MuAAKalQ/IMG_3422.JPG?imgmax=800',
'caption': 'a very typical bus station'
}
```
#### `labeled`
Each instance in this configuration represents a single image with a caption, with additional machine-generated image labels and confidence scores:
```
{
'image_url': 'https://thumb1.shutterstock.com/display_pic_with_logo/261388/223876810/stock-vector-christmas-tree-on-a-black-background-vector-223876810.jpg',
'caption': 'christmas tree on a black background .',
'labels': ['christmas tree', 'christmas decoration', 'font', 'text', 'graphic design', 'illustration','interior design', 'tree', 'christmas eve', 'ornament', 'fir', 'plant', 'pine', 'pine family', 'graphics'],
'MIDs': ['/m/025nd', '/m/05fc9mj', '/m/03gq5hm', '/m/07s6nbt', '/m/03c31', '/m/01kr8f', '/m/0h8nzzj', '/m/07j7r', '/m/014r1s', '/m/05ykl4', '/m/016x4z', '/m/05s2s', '/m/09t57', '/m/01tfm0', '/m/021sdg'],
'confidence_scores': [0.9818305373191833, 0.952756941318512, 0.9227379560470581, 0.8524878621101379, 0.7597672343254089, 0.7493422031402588, 0.7332468628883362, 0.6869218349456787, 0.6552258133888245, 0.6357356309890747, 0.5992692708969116, 0.585474967956543, 0.5222904086112976, 0.5113164782524109, 0.5036579966545105]
}
```
### Data Fields
#### `unlabeled`
- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.
#### `labeled`
- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.
- `labels`: A sequence of machine-generated labels obtained using the [Google Cloud Vision API](https://cloud.google.com/vision).
- `MIDs`: A sequence of machine-generated identifiers (MID) corresponding to the label's Google Knowledge Graph entry.
- `confidence_scores`: A sequence of confidence scores denoting how likely it is that the corresponding label is present in the image.
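Since `labels` and `confidence_scores` are parallel sequences, a common first step is pairing them up and keeping only the confident labels; a minimal sketch (the helper name and threshold are illustrative):

```python
def high_confidence_labels(example, threshold=0.7):
    # Zip each label with its confidence score; keep those at or above threshold.
    return [label for label, score in zip(example["labels"], example["confidence_scores"])
            if score >= threshold]

# Toy example shaped like a `labeled` record:
example = {
    "labels": ["christmas tree", "font", "graphics"],
    "confidence_scores": [0.98, 0.92, 0.50],
}
print(high_confidence_labels(example))  # ['christmas tree', 'font']
```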
### Data Splits
#### `unlabeled`
The basic version of the dataset is split into Training and Validation splits. The Training split consists of 3,318,333 image-URL/caption pairs and the Validation split consists of 15,840 image-URL/caption pairs.
#### `labeled`
The labeled version of the dataset has a single split. All of the data is contained in the Training split, which is a subset of 2,007,090 image-URL/caption pairs from the Training split of the `unlabeled` config.
## Dataset Creation
### Curation Rationale
From the paper:
> In this paper, we make contributions to both the data and modeling categories. First, we present a new dataset of caption annotations, Conceptual Captions (Fig. 1), which has an order of magnitude more images than the COCO dataset. Conceptual Captions consists of about 3.3M ⟨image, description⟩ pairs. In contrast with the curated style of the COCO images, Conceptual Captions images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles.
### Source Data
#### Initial Data Collection and Normalization
From the homepage:
>For Conceptual Captions, we developed a fully automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions. Because no human annotators are involved, the Conceptual Captions dataset generation process is highly scalable.
>
>To generate this dataset, we started with a Flume pipeline that processes billions of Internet webpages, extracting, filtering, and processing candidate image and caption pairs, and keeping those that pass through several filters.
>
>We first screen for certain properties like size, aspect ratio, adult content scores. These filters discard more than 65% of the candidates. Next, we use Alt-Texts for text-based filtering, removing captions with non-descriptive text (such as SEO tags or hashtags); we also discard texts with high sentiment polarity or adult content scores, resulting in just 3% of the incoming candidates passing through.
>
>In the next step, we filter out candidates for which none of the text tokens can be mapped to the visual content of the image. We use image classifiers (e.g., Google Cloud Vision APIs) to assign class labels to images and match these labels against the candidate text (allowing morphological transformations), discarding around 60% of the candidates that reach this stage.
>
>The candidates passing the above filters tend to be good Alt-text image descriptions. However, a large majority of these use proper names (for people, venues, locations, etc.), brands, dates, quotes, etc. This creates two distinct problems. First, some of these cannot be inferred based on the image pixels alone. This is problematic because unless the image has the necessary visual information it is not useful for training. Second, even if the proper names could be inferred from the image it is extremely difficult for a model to learn to perform both fine-grained classification and natural-language descriptions simultaneously. We posit that if automatic determination of names, locations, brands, etc. is needed, it should be done as a separate task that may leverage image meta-information (e.g. GPS info), or complementary techniques such as OCR.
>
>We address the above problems with the insight that proper names should be replaced by words that represent the same general notion, i.e., by their concept. For example, we remove locations (“Crowd at a concert in Los Angeles“ becomes “Crowd at a concert”), names (e.g., “Former Miss World Priyanka Chopra on the red carpet” becomes “actor on the red carpet”), proper noun modifiers (e.g., “Italian cuisine” becomes just “cuisine”) and noun phrases (e.g., “actor and actor” becomes “actors”). Around 20% of the samples are discarded during this transformation because it can leave sentences too short, or otherwise inconsistent.
>
>Finally, we perform another round of filtering to identify concepts with low-count. We cluster all resolved entities (e.g., “actor”, “dog”, “neighborhood”, etc.) and keep only the candidate types which have a count of over 100 mentions. This retains around 16K entity concepts such as: “person”, “actor”, “artist”, “player” and “illustration”. The less frequent ones that we dropped include “baguette”, “bridle”, “deadline”, “ministry” and “funnel”.
#### Who are the source language producers?
Not specified.
### Annotations
#### Annotation process
Annotations are extracted jointly with the images using the automatic pipeline.
#### Who are the annotators?
Not specified.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Piyush Sharma, Nan Ding, Sebastian Goodman and Radu Soricut.
### Licensing Information
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
### Citation Information
```bibtex
@inproceedings{sharma2018conceptual,
title = {Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning},
author = {Sharma, Piyush and Ding, Nan and Goodman, Sebastian and Soricut, Radu},
booktitle = {Proceedings of ACL},
year = {2018},
}
```
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) and [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
dlwh/wikitext_103_detokenized | 2022-05-05T20:08:17.000Z | [
"region:us"
] | dlwh | null | null | null | 2 | 1,624 | Entry not found |
openai/webgpt_comparisons | 2022-12-19T17:55:29.000Z | [
"arxiv:2112.09332",
"region:us"
] | openai | WebGPT Comparisons contains all of the comparisons marked as suitable for reward modelling from the WebGPT paper. | @inproceedings{nakano2021webgpt,
author = {Reiichiro Nakano and Jacob Hilton and Suchir Balaji and Jeff Wu and Long Ouyang and Christina Kim and Christopher Hesse and Shantanu Jain and Vineet Kosaraju and William Saunders and Xu Jiang and Karl Cobbe and Tyna Eloundou and Gretchen Krueger and Kevin Button and Matthew Knight and Benjamin Chess and John Schulman},
title = {WebGPT: Browser-assisted question-answering with human feedback},
booktitle = {arXiv},
year = 2021,
} | null | 172 | 1,620 | ---
pretty_name: WebGPT Comparisons
---
# Dataset Card for WebGPT Comparisons
## Dataset Description
In the [WebGPT paper](https://arxiv.org/abs/2112.09332), the authors trained a reward model from human feedback.
They used the reward model to train a long form question answering model to align with human preferences.
This is the dataset of all comparisons that were marked as suitable for reward modeling by the end of the WebGPT project.
There are 19,578 comparisons in total.
Each example in the dataset contains a pair of model answers for a question, and the associated metadata.
Each answer has a preference score from humans that can be used to determine which of the two answers are better.
Overall, an example has the following fields:
* `question`: The text of the question, together with the name of the dataset from which it was taken and a unique ID.
* `quotes_0`: The extracts that the model found while browsing for `answer_0`, together with the title of the page on which the extract was found, constructed from the HTML title and domain name of the page.
* `answer_0`: The final answer that the model composed using `quotes_0`.
* `tokens_0`: The prefix that would have been given to the model in the final step of the episode to create `answer_0`, and the completion given by the model or human. The prefix is made up of the question and the quotes, with some truncation, and the completion is simply the answer. Both are tokenized using the GPT-2 tokenizer. The concatenation of the prefix and completion is the input used for reward modeling.
* `score_0`: The strength of the preference for `answer_0` over `answer_1` as a number from −1 to 1. It sums to 0 with `score_1`, and an answer is preferred if and only if its score is positive. For reward modeling, we treat scores of 0 as soft 50% labels, and all other scores as hard labels (using only their sign).
* `quotes_1`: The counterpart to `quotes_0`.
* `answer_1`: The counterpart to `answer_0`.
* `tokens_1`: The counterpart to `tokens_0`.
* `score_1`: The counterpart to `score_0`.
This information was found in Appendix K of the WebGPT paper.
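Putting the scoring rule above into code, a minimal sketch of recovering which answer is preferred from an example (the function name `preferred_index` is illustrative, not part of the dataset):

```python
def preferred_index(example):
    # An answer is preferred iff its score is positive; the two scores sum to 0.
    # A score of 0 means a tie (treated as a soft 50% label for reward modeling).
    if example["score_0"] > 0:
        return 0
    if example["score_0"] < 0:
        return 1
    return None

# Toy example carrying only the two score fields:
example = {"score_0": -1.0, "score_1": 1.0}
assert example["score_0"] + example["score_1"] == 0
print(preferred_index(example))  # 1
```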
## Citation Information
[https://arxiv.org/abs/2112.09332](https://arxiv.org/abs/2112.09332)
```
@inproceedings{nakano2021webgpt,
author = {Reiichiro Nakano and Jacob Hilton and Suchir Balaji and Jeff Wu and Long Ouyang and Christina Kim and Christopher Hesse and Shantanu Jain and Vineet Kosaraju and William Saunders and Xu Jiang and Karl Cobbe and Tyna Eloundou and Gretchen Krueger and Kevin Button and Matthew Knight and Benjamin Chess and John Schulman},
title = {WebGPT: Browser-assisted question-answering with human feedback},
booktitle = {arXiv},
year = 2021,
}
```
Dataset added to the Hugging Face Hub by [@Tristan](https://huggingface.co/Tristan) and [@natolambert](https://huggingface.co/natolambert) |
beomi/KoAlpaca-v1.1a | 2023-05-26T06:32:02.000Z | [
"task_categories:text-generation",
"language:ko",
"KoAlpaca",
"region:us"
] | beomi | null | null | null | 10 | 1,620 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 23371027
num_examples: 21155
download_size: 12856014
dataset_size: 23371027
task_categories:
- text-generation
language:
- ko
tags:
- KoAlpaca
pretty_name: KoAlpaca-v1.1a
---
# Dataset Card for "KoAlpaca-v1.1a"
## Project Repo
- Github Repo: [Beomi/KoAlpaca](https://github.com/Beomi/KoAlpaca)
## How to use
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("beomi/KoAlpaca-v1.1a", split="train")
>>> ds
Dataset({
    features: ['instruction', 'output', 'url'],
num_rows: 21155
})
```
```python
>>> ds[0]
{'instruction': '양파는 어떤 식물 부위인가요? 그리고 고구마는 뿌리인가요?',
'output': '양파는 잎이 아닌 식물의 줄기 부분입니다. 고구마는 식물의 뿌리 부분입니다. \n\n식물의 부위의 구분에 대해 궁금해하는 분이라면 분명 이 질문에 대한 답을 찾고 있을 것입니다. 양파는 잎이 아닌 줄기 부분입니다. 고구마는 다른 질문과 답변에서 언급된 것과 같이 뿌리 부분입니다. 따라서, 양파는 식물의 줄기 부분이 되고, 고구마는 식물의 뿌리 부분입니다.\n\n 덧붙이는 답변: 고구마 줄기도 볶아먹을 수 있나요? \n\n고구마 줄기도 식용으로 볶아먹을 수 있습니다. 하지만 줄기 뿐만 아니라, 잎, 씨, 뿌리까지 모든 부위가 식용으로 활용되기도 합니다. 다만, 한국에서는 일반적으로 뿌리 부분인 고구마를 주로 먹습니다.',
'url': 'https://kin.naver.com/qna/detail.naver?d1id=11&dirId=1116&docId=55320268'}
``` |
totto | 2023-02-23T09:49:19.000Z | [
"task_categories:table-to-text",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0",
"arxiv:2004.14373",
"region:us"
] | null | ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description. | @inproceedings{parikh2020totto,
title={{ToTTo}: A Controlled Table-To-Text Generation Dataset},
author={Parikh, Ankur P and Wang, Xuezhi and Gehrmann, Sebastian and Faruqui, Manaal and Dhingra, Bhuwan and Yang, Diyi and Das, Dipanjan},
booktitle={Proceedings of EMNLP},
year={2020}
} | null | 5 | 1,615 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- table-to-text
task_ids: []
paperswithcode_id: totto
pretty_name: ToTTo
dataset_info:
features:
- name: id
dtype: int32
- name: table_page_title
dtype: string
- name: table_webpage_url
dtype: string
- name: table_section_title
dtype: string
- name: table_section_text
dtype: string
- name: table
list:
list:
- name: column_span
dtype: int32
- name: is_header
dtype: bool
- name: row_span
dtype: int32
- name: value
dtype: string
- name: highlighted_cells
sequence:
sequence: int32
- name: example_id
dtype: string
- name: sentence_annotations
sequence:
- name: original_sentence
dtype: string
- name: sentence_after_deletion
dtype: string
- name: sentence_after_ambiguity
dtype: string
- name: final_sentence
dtype: string
- name: overlap_subset
dtype: string
splits:
- name: train
num_bytes: 652754806
num_examples: 120761
- name: validation
num_bytes: 47277039
num_examples: 7700
- name: test
num_bytes: 40883586
num_examples: 7700
download_size: 187724372
dataset_size: 740915431
---
# Dataset Card for ToTTo
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** None
- **Repository:** https://github.com/google-research-datasets/ToTTo
- **Paper:** https://arxiv.org/abs/2004.14373
- **Leaderboard:** https://github.com/google-research-datasets/ToTTo#leaderboard
- **Point of Contact:** [totto@google.com](mailto:totto@google.com)
### Dataset Summary
ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled
generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
A sample training set is provided below
```
{'example_id': '1762238357686640028',
'highlighted_cells': [[13, 2]],
'id': 0,
'overlap_subset': 'none',
'sentence_annotations': {'final_sentence': ['A Favorita is the telenovela aired in the 9 pm timeslot.'],
'original_sentence': ['It is also the first telenovela by the writer to air in the 9 pm timeslot.'],
'sentence_after_ambiguity': ['A Favorita is the telenovela aired in the 9 pm timeslot.'],
'sentence_after_deletion': ['It is the telenovela air in the 9 pm timeslot.']},
'table': [[{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': '#'},
{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Run'},
{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Title'},
{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Chapters'},
{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Author'},
{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Director'},
{'column_span': 1,
'is_header': True,
'row_span': 1,
'value': 'Ibope Rating'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '59'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'June 5, 2000— February 2, 2001'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Laços de Família'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '209'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Manoel Carlos'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Ricardo Waddington'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '44.9'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '60'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'February 5, 2001— September 28, 2001'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Porto dos Milagres'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '203'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Aguinaldo Silva Ricardo Linhares'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Marcos Paulo Simões'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '44.6'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '61'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'October 1, 2001— June 14, 2002'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'O Clone'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '221'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Glória Perez'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Jayme Monjardim'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '47.0'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '62'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'June 17, 2002— February 14, 2003'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Esperança'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '209'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Benedito Ruy Barbosa'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Luiz Fernando'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '37.7'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '63'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'February 17, 2003— October 10, 2003'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Mulheres Apaixonadas'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '203'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Manoel Carlos'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Ricardo Waddington'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '46.6'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '64'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'October 13, 2003— June 25, 2004'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Celebridade'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '221'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Gilberto Braga'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Dennis Carvalho'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '46.0'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '65'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'June 28, 2004— March 11, 2005'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Senhora do Destino'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '221'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Aguinaldo Silva'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Wolf Maya'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '50.4'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '66'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'March 14, 2005— November 4, 2005'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'América'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '203'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Glória Perez'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Jayme Monjardim Marcos Schechtman'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '49.4'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '67'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'November 7, 2005— July 7, 2006'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Belíssima'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '209'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Sílvio de Abreu'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Denise Saraceni'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '48.5'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '68'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'July 10, 2006— March 2, 2007'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Páginas da Vida'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '203'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Manoel Carlos'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Jayme Monjardim'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '46.8'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '69'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'March 5, 2007— September 28, 2007'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Paraíso Tropical'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '179'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Gilberto Braga Ricardo Linhares'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Dennis Carvalho'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '42.8'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '70'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'October 1, 2007— May 31, 2008'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Duas Caras'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '210'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Aguinaldo Silva'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Wolf Maya'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '41.1'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '71'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'June 2, 2008— January 16, 2009'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'A Favorita'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '197'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'João Emanuel Carneiro'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Ricardo Waddington'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '39.5'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '72'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'January 19, 2009— September 11, 2009'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Caminho das Índias'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '203'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Glória Perez'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Marcos Schechtman'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '38.8'}],
[{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '73'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'September 14, 2009— May 14, 2010'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Viver a Vida'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '209'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Manoel Carlos'},
{'column_span': 1,
'is_header': False,
'row_span': 1,
'value': 'Jayme Monjardim'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '35.6'}]],
'table_page_title': 'List of 8/9 PM telenovelas of Rede Globo',
'table_section_text': '',
'table_section_title': '2000s',
'table_webpage_url': 'http://en.wikipedia.org/wiki/List_of_8/9_PM_telenovelas_of_Rede_Globo'}
```
Please note that in the test set, sentence annotations are not available, and thus the values inside `sentence_annotations` can be safely ignored.
### Data Fields
- `table_webpage_url` (`str`): Table webpage URL.
- `table_page_title` (`str`): Table metadata with context about the table.
- `table_section_title` (`str`): Table metadata with context about the table.
- `table_section_text` (`str`): Table metadata with context about the table.
- `table` (`List[List[Dict]]`): The outer list represents rows and the inner lists represent columns. Each Dict has the fields:
- `column_span` (`int`)
- `is_header` (`bool`)
- `row_span` (`int`)
- `value` (`str`)
- `highlighted_cells` (`List[[row_index, column_index]]`): Where each `[row_index, column_index]` pair indicates that `table[row_index][column_index]` is highlighted.
- `example_id` (`int`): A unique id for this example.
- `sentence_annotations`: Consists of the `original_sentence` and the sequence of revised sentences produced in order to arrive at the `final_sentence`.
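To make the relationship between `table` and `highlighted_cells` concrete, here is a minimal, self-contained sketch. It is plain Python and not part of the dataset loader; the toy table and the `highlighted_values` helper are invented for illustration.

```python
# Resolve `highlighted_cells` coordinates against a `table` in the format
# documented above: a list of rows, each row a list of cell dicts.

def highlighted_values(table, highlighted_cells):
    """Return each highlighted cell as a (row_index, column_index, value) tuple."""
    return [
        (row_idx, col_idx, table[row_idx][col_idx]["value"])
        for row_idx, col_idx in highlighted_cells
    ]

# A tiny table shaped like the documented fields.
table = [
    [{"column_span": 1, "is_header": True, "row_span": 1, "value": "Title"},
     {"column_span": 1, "is_header": True, "row_span": 1, "value": "Rating"}],
    [{"column_span": 1, "is_header": False, "row_span": 1, "value": "A Favorita"},
     {"column_span": 1, "is_header": False, "row_span": 1, "value": "39.5"}],
]
highlighted_cells = [[1, 0], [1, 1]]

print(highlighted_values(table, highlighted_cells))
# [(1, 0, 'A Favorita'), (1, 1, '39.5')]
```

Header cells can be filtered in the same way by checking the `is_header` flag on each cell dict.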
### Data Splits
```
DatasetDict({
train: Dataset({
features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],
num_rows: 120761
})
validation: Dataset({
features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],
num_rows: 7700
})
test: Dataset({
features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],
num_rows: 7700
})
})
```
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{parikh2020totto,
title={{ToTTo}: A Controlled Table-To-Text Generation Dataset},
author={Parikh, Ankur P and Wang, Xuezhi and Gehrmann, Sebastian and Faruqui, Manaal and Dhingra, Bhuwan and Yang, Diyi and Das, Dipanjan},
booktitle={Proceedings of EMNLP},
year={2020}
}
```
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
0n1xus/codexglue | 2021-11-18T08:45:46.000Z | [
"region:us"
] | 0n1xus | CodeXGLUE is a benchmark dataset to foster machine learning research for program understanding and generation.
CodeXGLUE includes a collection of 10 tasks across 14 datasets and a platform for model evaluation and comparison. | @article{Lu2021,
author = {Lu, Shuai and Guo, Daya and Ren, Shuo and Huang, Junjie and Svyatkovskiy, Alexey and Blanco, Ambrosio and Clement, Colin B. and Drain, Dawn and Jiang, Daxin and Tang, Duyu and Li, Ge and Zhou, Lidong and Shou, Linjun and Zhou, Long and Tufano, Michele and Gong, Ming and Zhou, Ming and Duan, Nan and Sundaresan, Neel and Deng, Shao Kun and Fu, Shengyu and Liu, Shujie},
year = {2021},
booktitle = {arXiv},
title = {CodeXGLUE - A Machine Learning Benchmark Dataset for Code Understanding and Generation}
} | null | 3 | 1,611 | Entry not found |
shariqfarooq/cs323_densepred_depth | 2023-09-16T00:02:26.000Z | [
"region:us"
] | shariqfarooq | null | null | null | 0 | 1,604 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: depth
dtype: image
splits:
- name: train
num_bytes: 651397023.7943412
num_examples: 25356
- name: test
num_bytes: 13440344.421658808
num_examples: 518
download_size: 343390111
dataset_size: 664837368.216
---
# Dataset Card for "cs323_densepred_depth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
UBC-NLP/orca | 2023-07-17T23:02:07.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"language:ara",
"Arabic",
"NLU Benchmark",
"Natural Language Inference (NLI)",
"Question Answering (QA)",
"Semantic Textual Similarity and Paraphrase (STSP)",
"Sentence Classification (SC)",
"Structure Predictions (SP)",
"Topic Classification (TC)",
"Word Sense Disambiguation (WSD)",
"arxiv:2212.10758",
"arxiv:2004.01401",
"region:us"
] | UBC-NLP | null | null | null | 3 | 1,603 |
---
viewer: false
language:
- ara
tags:
- Arabic
- NLU Benchmark
- Natural Language Inference (NLI)
- Question Answering (QA)
- Semantic Textual Similarity and Paraphrase (STSP)
- Sentence Classification (SC)
- Structure Predictions (SP)
- Topic Classification (TC)
- Word Sense Disambiguation (WSD)
task_categories:
- text-classification
- token-classification
- question-answering
extra_gated_fields:
Name: text
Email: text
Affilation: text
Country: text
I agree to use this dataset for non-commercial use ONLY: checkbox
I agree to cite the ORCA paper and all original papers: checkbox
---
<p align="center">
<br>
<img src="https://orca.dlnlp.ai/assets/orca_logo.png" width="55%"/>
<br>
<p>
<p align="center">
<!-- <a href="https://github.com/UBC-NLP/orca/releases"> -->
<!-- <img alt="GitHub release" src="https://img.shields.io/github/release/UBC-NLP/orca.svg"> </a>-->
<a href="https://orca.dlnlp.ai/">
<img alt="Documentation" src="https://img.shields.io/website.svg?down_color=red&down_message=offline&up_message=online&url=https://orca.dlnlp.ai">
</a>
<!-- <a href="https://github.com/UBC-NLP/orca/blob/main/LICENSE"><img alt="GitHub license" src="https://img.shields.io/github/license/UBC-NLP/orca?logoColor=blue"></a> -->
<!-- <a href='https://orca.readthedocs.io/en/latest/?badge=latest'><img src='https://readthedocs.org/projects/orca/badge/?version=latest' alt='Documentation Status' /></a> -->
<!-- <a href="https://github.com/UBC-NLP/orca/stargazers"><img alt="GitHub stars" src="https://img.shields.io/github/stars/UBC-NLP/orca"></a>
<!-- <a href="https://github.com/UBC-NLP/orca/network"><img alt="GitHub forks" src="https://img.shields.io/github/forks/UBC-NLP/orca"></a> -->
</p>
In this work, we introduce [**ORCA**](https://arxiv.org/abs/2212.10758), a publicly available benchmark for Arabic language understanding evaluation. ORCA is carefully constructed to cover diverse Arabic varieties and a wide range of challenging Arabic understanding tasks exploiting 60 different datasets across seven NLU task clusters. To measure current progress in Arabic NLU, we use ORCA to offer a comprehensive comparison between 18 multilingual and Arabic language models.
# ORCA Task Cluster
We arrange [**ORCA**](https://arxiv.org/abs/2212.10758) into seven NLU task clusters. These are (1) sentence classification, (2) structured prediction, (3) semantic textual similarity and paraphrase, (4) text classification, (5) natural language inference, (6) word sense disambiguation, and (7) question answering.
### (1) Natural Language Inference (NLI)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|------|
|[ANS Stance](https://aclanthology.org/2020.fever-1.2/) |MSA | Macro F1 | [(Khouja, 2020)](https://aclanthology.org/2020.fever-1.2/) |
|[Baly Stance](https://aclanthology.org/N18-2004/) |MSA | Macro F1 | [(Baly et al., 2018)](https://aclanthology.org/N18-2004/) |
|[XLNI](https://github.com/facebookresearch/XNLI) |MSA | Macro F1 | [(Conneau et al., 2018)](https://github.com/facebookresearch/XNLI)|
### (2) Question Answering (QA)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|------|
|[Question Answering](https://aclanthology.org/2021.acl-long.551/) |MSA | Macro F1 | [(Abdul-Mageed et al., 2020a)](https://aclanthology.org/2021.acl-long.551/) |
### (3) Semantic Textual Similarity and Paraphrase (STSP)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|-------|
|[Emotion Regression](https://aclanthology.org/S18-1001/) |MSA | Spearman Correlation| [(Saif et al., 2018)](https://aclanthology.org/S18-1001/) |
|[MQ2Q](https://aclanthology.org/2019.nsurl-1.1) |MSA | Macro F1 | [(Seelawi et al., 2019)](https://aclanthology.org/2019.nsurl-1.1) |
|[STS](https://aclanthology.org/S17-2001/) |MSA | Macro F1 | [(Cer et al., 2017)](https://aclanthology.org/S17-2001/) |
### (4) Sentence Classification (SC)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|-------|
|[Abusive](https://aclanthology.org/W19-3512/) |DA | Macro F1 | [(Mulki et al., 2019)](https://aclanthology.org/W19-3512/) |
|[Adult](https://aclanthology.org/2021.wanlp-1.14) |DA | Macro F1 | [(Mubarak et al., 2021)](https://aclanthology.org/2021.wanlp-1.14) |
|[Age](https://www.aclweb.org/anthology/2020.osact-1.3) |DA | Macro F1 | [(Abdul-Mageed et al., 2020b)]( https://aclanthology.org/2020.osact-1.3/) |
|[ANS Claim](https://aclanthology.org/2020.fever-1.2/) |MSA | Macro F1 | [(Khouja, 2020)](https://aclanthology.org/2020.fever-1.2/) |
|[Dangerous ](https://aclanthology.org/N18-2004/) |DA | Macro F1 | [(Alshehri et al., 2020)](https://www.aclweb.org/anthology/2020.osact-1.6)|
|[Dialect Binary](https://github.com/facebookresearch/XNLI) |DA | Macro F1 | [(Farha, 2020)](https://aclanthology.org/2020.osact-1.5/), [(Zaidan, 2014)](https://www.aclweb.org/anthology/J14-1006), [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/), [(Bouamor et al., 2019)](https://www.aclweb.org/anthology/W19-4622), [(Abdelali et al., 2020)](https://aclanthology.org/2021.wanlp-1.1), [(El-Haj, 2020)](https://aclanthology.org/2020.lrec-1.165/). |
|[Dialect Country](https://github.com/facebookresearch/XNLI) |DA | Macro F1 | [(Farha, 2020)](https://aclanthology.org/2020.osact-1.5/), [(Zaidan, 2014)](https://www.aclweb.org/anthology/J14-1006), [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/), [(Bouamor et al., 2019)](https://www.aclweb.org/anthology/W19-4622), [(Abdelali et al., 2020)](https://aclanthology.org/2021.wanlp-1.1), [(El-Haj, 2020)](https://aclanthology.org/2020.lrec-1.165/). |
|[Dialect Region](https://github.com/facebookresearch/XNLI) |DA | Macro F1 | [(Farha, 2020)](https://aclanthology.org/2020.osact-1.5/), [(Zaidan, 2014)](https://www.aclweb.org/anthology/J14-1006), [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/), [(Bouamor et al., 2019)](https://www.aclweb.org/anthology/W19-4622), [(Abdelali et al., 2020)](https://aclanthology.org/2021.wanlp-1.1), [(El-Haj, 2020)](https://aclanthology.org/2020.lrec-1.165/). |
|[Emotion](https://www.aclweb.org/anthology/2020.osact-1.3) |DA | Macro F1 | [(Abdul-Mageed et al., 2020b)]( https://aclanthology.org/2020.osact-1.3/) |
|[Gender](https://www.aclweb.org/anthology/2020.osact-1.3) |DA | Macro F1 | [(Abdul-Mageed et al., 2020b)]( https://aclanthology.org/2020.osact-1.3/) |
|[Hate Speech](https://www.aclweb.org/anthology/2020.osact-1.7) |DA | Macro F1 | [(Mubarak et al., 2020)](https://www.aclweb.org/anthology/2020.osact-1.7)|
|[Irony](https://dl.acm.org/doi/10.1145/3368567.3368585) |DA | Macro F1 | [(Ghanem et al., 2019)](https://dl.acm.org/doi/10.1145/3368567.3368585) |
|[Machine Generation](https://aclanthology.org/2020.wanlp-1.7/) |MSA | Macro F1 | [(Nagoudi et al., 2020)](https://aclanthology.org/2020.wanlp-1.7/) |
|[Offensive](https://aclanthology.org/2020.osact-1.8/) |DA | Macro F1 | [(Mubarak et al., 2020)](https://www.aclweb.org/anthology/2020.osact-1.7)|
|[Sarcasm](https://aclanthology.org/N18-2004/) |DA | Macro F1 | [(Farha and Magdy, 2020)](https://aclanthology.org/2020.osact-1.5/) |
|[Sentiment Analysis](https://aclanthology.org/2021.acl-long.551/) |DA | Macro F1 | [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/) |
### (5) Structure Predictions (SP)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|-------|
|[Aqmar NER](https://www.cs.cmu.edu/~ark/ArabicNER/) |MSA | Macro F1 | [(Mohit, 2012)](https://www.cs.cmu.edu/~ark/ArabicNER/) |
|[Arabic NER Corpus](http://www.dsic.upv.es/~prosso/resources/BenajibaRosso_IICAI07.pdf) |MSA | Macro F1 | [(Benajiba and Rosso, 2007)](http://www.dsic.upv.es/~prosso/resources/BenajibaRosso_IICAI07.pdf) |
|[Dialect Part Of Speech](https://aclanthology.org/L18-1015.pdf) |DA | Macro F1 | [(Darwish et al., 2018)](https://aclanthology.org/L18-1015.pdf) |
|[MSA Part Of Speech](https://arxiv.org/abs/2004.01401) |MSA | Macro F1 | [(Liang et al., 2020)](https://arxiv.org/abs/2004.01401) |
### (6) Topic Classification (TC)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|-------|
|[Topic](https://aclanthology.org/2021.acl-long.551/) |MSA | Macro F1 | [(Abbas et al.,2011)](https://www.dline.info/fpaper/jdim/v9i5/1.pdf), [(Chouigui et al.,2017)](https://www.researchgate.net/publication/320871871_Poster_ANT_Corpus_An_Arabic_News_Text_Collection_for_Textual_Classification), [(Saad, 2010)](http://site.iugaza.edu.ps/wp-content/uploads/mksaad-OSAC-OpenSourceArabicCorpora-EECS10-rev9(1).pdf). |
### (7) Word Sense Disambiguation (WSD)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|-------|
|[Word Sense Disambiguation](https://www.mdpi.com/2076-3417/11/6/2567) |MSA | Macro F1 | [(El-Razzaz, 2021)](https://www.mdpi.com/2076-3417/11/6/2567) |
# How to Use ORCA
### Request Access ###
To obtain access to the ORCA benchmark on Hugging Face, follow these steps:
- Log in to your Hugging Face account
<img src="https://raw.githubusercontent.com/UBC-NLP/orca/main/orca_request1.png" width="70%"/>
- Request access
<img src="https://raw.githubusercontent.com/UBC-NLP/orca/main/orca_request2.png" width="70%"/>
### Install Requirements
```shell
pip install datasets transformers seqeval
```
### Log in with the Hugging Face CLI ###
You can get/manage your access tokens in your [settings](https://huggingface.co/docs/hub/security-tokens).
```shell
export HUGGINGFACE_TOKEN=""
huggingface-cli login --token $HUGGINGFACE_TOKEN
```
### Fine-tuning a model on ORCA tasks
We provide a Google Colab Notebook that includes instructions for fine-tuning any model on ORCA tasks. <a href="https://colab.research.google.com/github/UBC-NLP/orca/blob/main/Finetuning_ORCA.ipynb"><img alt="colab" src="https://colab.research.google.com/assets/colab-badge.svg"></a>
### Submitting your results on ORCA test
We design a public leaderboard for scoring PLMs on ORCA. Our leaderboard is interactive and offers rich meta-data about the various datasets involved as well as the language models we evaluate.
You can evaluate your models using the **ORCA** leaderboard: **[https://orca.dlnlp.ai](https://orca.dlnlp.ai/index_main.php)**
---
## Citation
If you use ORCA for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):
```
@inproceedings{elmadany-etal-2023-orca,
title = "{ORCA}: A Challenging Benchmark for {A}rabic Language Understanding",
author = "Elmadany, AbdelRahim and
Nagoudi, ElMoatez Billah and
Abdul-Mageed, Muhammad",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-acl.609",
pages = "9559--9586",
}
```
---
## Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](https://www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
|
bigbio/bc5cdr | 2022-12-22T15:43:20.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | The BioCreative V Chemical Disease Relation (CDR) dataset is a large annotated text corpus of human annotations of all chemicals, diseases and their interactions in 1,500 PubMed articles. | @article{DBLP:journals/biodb/LiSJSWLDMWL16,
author = {Jiao Li and
Yueping Sun and
Robin J. Johnson and
Daniela Sciaky and
Chih{-}Hsuan Wei and
Robert Leaman and
Allan Peter Davis and
Carolyn J. Mattingly and
Thomas C. Wiegers and
Zhiyong Lu},
title = {BioCreative {V} {CDR} task corpus: a resource for chemical disease
relation extraction},
journal = {Database J. Biol. Databases Curation},
volume = {2016},
year = {2016},
url = {https://doi.org/10.1093/database/baw068},
doi = {10.1093/database/baw068},
timestamp = {Thu, 13 Aug 2020 12:41:41 +0200},
biburl = {https://dblp.org/rec/journals/biodb/LiSJSWLDMWL16.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 1 | 1,601 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: PUBLIC_DOMAIN_MARK_1p0
pretty_name: BC5CDR
homepage: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- RELATION_EXTRACTION
---
# Dataset Card for BC5CDR
## Dataset Description
- **Homepage:** http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER, NED, RE
The BioCreative V Chemical Disease Relation (CDR) dataset is a large annotated text corpus of human annotations of all chemicals, diseases and their interactions in 1,500 PubMed articles.
## Citation Information
```
@article{DBLP:journals/biodb/LiSJSWLDMWL16,
author = {Jiao Li and
Yueping Sun and
Robin J. Johnson and
Daniela Sciaky and
Chih{-}Hsuan Wei and
Robert Leaman and
Allan Peter Davis and
Carolyn J. Mattingly and
Thomas C. Wiegers and
Zhiyong Lu},
title = {BioCreative {V} {CDR} task corpus: a resource for chemical disease
relation extraction},
journal = {Database J. Biol. Databases Curation},
volume = {2016},
year = {2016},
url = {https://doi.org/10.1093/database/baw068},
doi = {10.1093/database/baw068},
timestamp = {Thu, 13 Aug 2020 12:41:41 +0200},
biburl = {https://dblp.org/rec/journals/biodb/LiSJSWLDMWL16.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
poem_sentiment | 2023-01-25T14:42:40.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2011.02686",
"region:us"
] | null | Poem Sentiment is a sentiment dataset of poem verses from Project Gutenberg. This dataset can be used for tasks such as sentiment classification or style transfer for poems. | @misc{sheng2020investigating,
title={Investigating Societal Biases in a Poetry Composition System},
author={Emily Sheng and David Uthus},
year={2020},
eprint={2011.02686},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 8 | 1,599 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: gutenberg-poem-dataset
pretty_name: Gutenberg Poem Dataset
dataset_info:
features:
- name: id
dtype: int32
- name: verse_text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
'2': no_impact
splits:
- name: train
num_bytes: 48555
num_examples: 892
- name: validation
num_bytes: 5788
num_examples: 105
- name: test
num_bytes: 5588
num_examples: 104
download_size: 49870
dataset_size: 59931
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
verse_text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for Gutenberg Poem Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** N/A
- **Repository:** [GitHub](https://github.com/google-research-datasets/poem-sentiment)
- **Paper:** [Investigating Societal Biases in a Poetry Composition System](https://arxiv.org/abs/2011.02686)
- **Leaderboard:** N/A
- **Point of Contact:** -
### Dataset Summary
Poem Sentiment is a sentiment dataset of poem verses from Project Gutenberg.
This dataset can be used for tasks such as sentiment classification or style transfer for poems.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English (`en`).
## Dataset Structure
### Data Instances
Example of one instance in the dataset.
```{'id': 0, 'label': 2, 'verse_text': 'with pale blue berries. in these peaceful shades--'}```
### Data Fields
- `id`: index of the example
- `verse_text`: The text of the poem verse
- `label`: The sentiment label. Here
- 0 = negative
- 1 = positive
- 2 = no impact
- 3 = mixed (both negative and positive)
> Note: The original dataset uses different label indices (negative = -1, no impact = 0, positive = 1)
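To make this remapping concrete, here is a minimal sketch converting the original label indices to this dataset's class ids. The mapping dicts and the `convert_label` helper are invented for illustration and are not part of the dataset loader.

```python
# Original release indices: negative = -1, no impact = 0, positive = 1.
# This dataset's class ids: 0 = negative, 1 = positive, 2 = no_impact.
# (The `mixed` label, 3, is not covered by the original index note above.)

ORIGINAL_TO_HF = {-1: 0, 1: 1, 0: 2}
HF_LABEL_NAMES = {0: "negative", 1: "positive", 2: "no_impact"}

def convert_label(original_label: int) -> int:
    """Map an original-release label index to this dataset's class id."""
    return ORIGINAL_TO_HF[original_label]

print(HF_LABEL_NAMES[convert_label(-1)])  # negative
print(HF_LABEL_NAMES[convert_label(0)])   # no_impact
```

This is useful when comparing results against work that reports labels in the original release's convention.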
### Data Splits
The dataset is split into a `train`, `validation`, and `test` split with the following sizes:
| | train | validation | test |
|--------------------|------:|-----------:|-----:|
| Number of examples | 892 | 105 | 104 |
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License
### Citation Information
```
@misc{sheng2020investigating,
title={Investigating Societal Biases in a Poetry Composition System},
author={Emily Sheng and David Uthus},
year={2020},
eprint={2011.02686},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
baber/mmlu | 2023-09-29T02:12:59.000Z | [
"region:us"
] | baber | This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge, covering 57 tasks including elementary mathematics, US history, computer science, law, and more. | @article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
} | null | 0 | 1,590 | Entry not found |
laion/laion-high-resolution | 2022-05-07T12:11:38.000Z | [
"license:cc-by-4.0",
"region:us"
] | laion | null | null | null | 41 | 1,586 | ---
license: cc-by-4.0
---
LAION high resolution is a >= 1024x1024 subset of LAION-5B. It has 170M samples.
A good use case is to train a superresolution model.
Refer to the [img2dataset guide](https://github.com/rom1504/img2dataset/blob/main/dataset_examples/laion-high-resolution.md) for downloading instructions.
frutiemax/rct_dataset | 2023-10-01T19:24:11.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"language:en",
"license:openrail",
"pixel art",
"region:us"
] | frutiemax | null | null | null | 0 | 1,585 | ---
language:
- en
license: openrail
size_categories:
- n<1K
task_categories:
- text-to-image
pretty_name: Rollercoaster Tycoon Dataset
dataset_info:
features:
- name: image
dtype: image
- name: id
dtype: int64
- name: object_type
dtype: string
- name: object_description
dtype: string
- name: view
dtype: int64
- name: color1
dtype: string
- name: color2
dtype: string
- name: color3
dtype: string
splits:
- name: train
num_bytes: 1477746.0
num_examples: 488
download_size: 1325670
dataset_size: 1477746.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- pixel art
---
|
fever | 2023-04-05T10:06:17.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"knowledge-verification",
"region:us"
] | null | null | null | null | 7 | 1,584 | ---
language:
- en
paperswithcode_id: fever
annotations_creators:
- crowdsourced
language_creators:
- found
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
pretty_name: FEVER
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- text-classification
task_ids: []
tags:
- knowledge-verification
dataset_info:
- config_name: v1.0
features:
- name: id
dtype: int32
- name: label
dtype: string
- name: claim
dtype: string
- name: evidence_annotation_id
dtype: int32
- name: evidence_id
dtype: int32
- name: evidence_wiki_url
dtype: string
- name: evidence_sentence_id
dtype: int32
splits:
- name: train
num_bytes: 29591412
num_examples: 311431
- name: labelled_dev
num_bytes: 3643157
num_examples: 37566
- name: unlabelled_dev
num_bytes: 1548965
num_examples: 19998
- name: unlabelled_test
num_bytes: 1617002
num_examples: 19998
- name: paper_dev
num_bytes: 1821489
num_examples: 18999
- name: paper_test
num_bytes: 1821668
num_examples: 18567
download_size: 44853972
dataset_size: 40043693
- config_name: v2.0
features:
- name: id
dtype: int32
- name: label
dtype: string
- name: claim
dtype: string
- name: evidence_annotation_id
dtype: int32
- name: evidence_id
dtype: int32
- name: evidence_wiki_url
dtype: string
- name: evidence_sentence_id
dtype: int32
splits:
- name: validation
num_bytes: 306243
num_examples: 2384
download_size: 392466
dataset_size: 306243
- config_name: wiki_pages
features:
- name: id
dtype: string
- name: text
dtype: string
- name: lines
dtype: string
splits:
- name: wikipedia_pages
num_bytes: 7254115038
num_examples: 5416537
download_size: 1713485474
dataset_size: 7254115038
---
# Dataset Card for "fever"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://fever.ai/](https://fever.ai/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
With billions of individual pages on the web providing information on almost every conceivable topic, we should have
the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this
information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to
transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot
of recent research and media coverage: false information coming from unreliable sources.
The FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction.
- FEVER Dataset: FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences
extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims
are classified as Supported, Refuted or NotEnoughInfo. For the first two classes, the annotators also recorded the
sentence(s) forming the necessary evidence for their judgment.
- FEVER 2.0 Adversarial Attacks Dataset: The FEVER 2.0 Dataset consists of 1174 claims created by the submissions of
participants in the Breaker phase of the 2019 shared task. Participants (Breakers) were tasked with generating
adversarial examples that induce classification errors for the existing systems. Breakers submitted a dataset of up to
1000 instances with an equal number of instances for each of the three classes (Supported, Refuted, NotEnoughInfo). Only
novel claims (i.e. not contained in the original FEVER dataset) were considered as valid entries to the shared task.
The submissions were then manually evaluated for Correctness (grammatical, appropriately labeled, and meeting the FEVER
annotation guideline requirements).
### Supported Tasks and Leaderboards
The task is verification of textual claims against textual sources.
When compared to textual entailment (TE)/natural language inference, the key difference is that in these tasks the
passage to verify each claim is given, and in recent years it typically consists of a single sentence, while in
verification systems it is retrieved from a large set of documents in order to form the evidence.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
#### v1.0
- **Size of downloaded dataset files:** 44.86 MB
- **Size of the generated dataset:** 40.05 MB
- **Total amount of disk used:** 84.89 MB
An example of 'train' looks as follows.
```
{'claim': 'Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.',
'evidence_wiki_url': 'Nikolaj_Coster-Waldau',
'label': 'SUPPORTS',
'id': 75397,
'evidence_id': 104971,
'evidence_sentence_id': 7,
'evidence_annotation_id': 92206}
```
#### v2.0
- **Size of downloaded dataset files:** 0.39 MB
- **Size of the generated dataset:** 0.30 MB
- **Total amount of disk used:** 0.70 MB
An example of 'validation' looks as follows.
```
{'claim': "There is a convicted statutory rapist called Chinatown's writer.",
'evidence_wiki_url': '',
'label': 'NOT ENOUGH INFO',
'id': 500000,
'evidence_id': -1,
'evidence_sentence_id': -1,
'evidence_annotation_id': 269158}
```
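Note that each row pairs a claim with a single evidence annotation, so a claim supported by several sentences spans several rows, while `NOT ENOUGH INFO` rows carry `-1` sentinels. A minimal sketch (a hypothetical helper, not part of the official loader) that regroups such rows into one record per claim:

```python
from collections import defaultdict

def group_evidence(rows):
    """Regroup per-evidence rows into one record per claim id."""
    claims, evidence = {}, defaultdict(list)
    for row in rows:
        claims[row["id"]] = (row["claim"], row["label"])
        if row["evidence_sentence_id"] != -1:  # -1 marks "no evidence" rows
            evidence[row["id"]].append(
                (row["evidence_wiki_url"], row["evidence_sentence_id"]))
    return {cid: {"claim": c, "label": l, "evidence": evidence[cid]}
            for cid, (c, l) in claims.items()}

rows = [  # two evidence sentences for one claim, plus a NOT ENOUGH INFO claim
    {"id": 75397, "claim": "Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.",
     "label": "SUPPORTS", "evidence_wiki_url": "Nikolaj_Coster-Waldau",
     "evidence_sentence_id": 7},
    {"id": 75397, "claim": "Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.",
     "label": "SUPPORTS", "evidence_wiki_url": "Fox_Broadcasting_Company",
     "evidence_sentence_id": 0},
    {"id": 500000, "claim": "There is a convicted statutory rapist called Chinatown's writer.",
     "label": "NOT ENOUGH INFO", "evidence_wiki_url": "",
     "evidence_sentence_id": -1},
]
grouped = group_evidence(rows)
print(len(grouped[75397]["evidence"]))  # 2
```

The grouped evidence tuples can then be resolved against the `wiki_pages` configuration via `evidence_wiki_url` and `evidence_sentence_id`.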
#### wiki_pages
- **Size of downloaded dataset files:** 1.71 GB
- **Size of the generated dataset:** 7.25 GB
- **Total amount of disk used:** 8.97 GB
An example of 'wikipedia_pages' looks as follows.
```
{'text': 'The following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world . ',
'lines': '0\tThe following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world .\n1\t',
'id': '1928_in_association_football'}
```
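Judging from the example above, the `lines` field packs a page's sentences as tab-separated `<sentence_id>\t<text>` records joined by newlines — the index that `evidence_sentence_id` in the claim splits points into. A small parsing sketch (an assumption based on the example, not an official utility):

```python
def parse_lines(lines_field):
    """Split a wiki_pages `lines` string into {sentence_id: text}."""
    sentences = {}
    for record in lines_field.split("\n"):
        if not record:
            continue
        sent_id, _, text = record.partition("\t")
        sentences[int(sent_id)] = text
    return sentences

lines = ("0\tThe following are the football -LRB- soccer -RRB- events "
         "of the year 1928 throughout the world .\n1\t")
parsed = parse_lines(lines)
print(parsed[0][:13])  # "The following"
```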
### Data Fields
The data fields are the same among all splits.
#### v1.0
- `id`: a `int32` feature.
- `label`: a `string` feature.
- `claim`: a `string` feature.
- `evidence_annotation_id`: a `int32` feature.
- `evidence_id`: a `int32` feature.
- `evidence_wiki_url`: a `string` feature.
- `evidence_sentence_id`: a `int32` feature.
#### v2.0
- `id`: a `int32` feature.
- `label`: a `string` feature.
- `claim`: a `string` feature.
- `evidence_annotation_id`: a `int32` feature.
- `evidence_id`: a `int32` feature.
- `evidence_wiki_url`: a `string` feature.
- `evidence_sentence_id`: a `int32` feature.
#### wiki_pages
- `id`: a `string` feature.
- `text`: a `string` feature.
- `lines`: a `string` feature.
### Data Splits
#### v1.0
| | train | unlabelled_dev | labelled_dev | paper_dev | unlabelled_test | paper_test |
|------|-------:|---------------:|-------------:|----------:|----------------:|-----------:|
| v1.0 | 311431 | 19998 | 37566 | 18999 | 19998 | 18567 |
#### v2.0
| | validation |
|------|-----------:|
| v2.0 | 2384 |
#### wiki_pages
| | wikipedia_pages |
|------------|----------------:|
| wiki_pages | 5416537 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
FEVER license:
```
These data annotations incorporate material from Wikipedia, which is licensed pursuant to the Wikipedia Copyright Policy. These annotations are made available under the license terms described on the applicable Wikipedia article pages, or, where Wikipedia license terms are unavailable, under the Creative Commons Attribution-ShareAlike License (version 3.0), available at http://creativecommons.org/licenses/by-sa/3.0/ (collectively, the "License Terms"). You may not use these files except in compliance with the applicable License Terms.
```
### Citation Information
If you use "FEVER Dataset", please cite:
```bibtex
@inproceedings{Thorne18Fever,
author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VERification}},
booktitle = {NAACL-HLT},
year = {2018}
}
```
If you use "FEVER 2.0 Adversarial Attacks Dataset", please cite:
```bibtex
@inproceedings{Thorne19FEVER2,
author = {Thorne, James and Vlachos, Andreas and Cocarascu, Oana and Christodoulopoulos, Christos and Mittal, Arpit},
title = {The {FEVER2.0} Shared Task},
booktitle = {Proceedings of the Second Workshop on {Fact Extraction and VERification (FEVER)}},
year = {2019}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq),
[@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun),
[@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
GEM/wiki_lingua | 2023-02-16T09:23:29.000Z | [
"task_categories:summarization",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:ar",
"language:cs",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:hi",
"language:id",
"language:it",
"language:ja",
"language:ko",
"language:nl",
"language:pt",
"language:ru",
"language:th",
"language:tr",
"language:vi",
"language:zh",
"license:cc-by-nc-sa-3.0",
"region:us"
] | GEM | WikiLingua is a large-scale multilingual dataset for the evaluation of
crosslingual abstractive summarization systems. The dataset includes ~770k
article and summary pairs in 18 languages from WikiHow. The gold-standard
article-summary alignments across languages were created by aligning the images
that are used to describe each how-to step in an article. | @article{ladhak-wiki-2020,
title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author = {Faisal Ladhak and Esin Durmus and Claire Cardie and Kathleen McKeown},
journal = {arXiv preprint arXiv:2010.03093},
year = {2020},
url = {https://arxiv.org/abs/2010.03093}
} | null | 36 | 1,584 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- ar
- cs
- de
- en
- es
- fr
- hi
- id
- it
- ja
- ko
- nl
- pt
- ru
- th
- tr
- vi
- zh
license:
- cc-by-nc-sa-3.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: wiki_lingua
---
# Dataset Card for GEM/wiki_lingua
## Dataset Description
- **Homepage:** None (See Repository)
- **Repository:** https://github.com/esdurmus/Wikilingua
- **Paper:** https://www.aclweb.org/anthology/2020.findings-emnlp.360/
- **Leaderboard:** N/A
- **Point of Contact:** Faisal Ladhak, Esin Durmus
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/wiki_lingua).
### Dataset Summary
Placeholder
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/wiki_lingua')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/wiki_lingua).
#### website
None (See Repository)
#### paper
https://www.aclweb.org/anthology/2020.findings-emnlp.360/
#### authors
Faisal Ladhak (Columbia University), Esin Durmus (Stanford University), Claire Cardie (Cornell University), Kathleen McKeown (Columbia University)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
None (See Repository)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
https://github.com/esdurmus/Wikilingua
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
https://www.aclweb.org/anthology/2020.findings-emnlp.360/
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
@inproceedings{ladhak-etal-2020-wikilingua,
title = "{W}iki{L}ingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization",
author = "Ladhak, Faisal and
Durmus, Esin and
Cardie, Claire and
McKeown, Kathleen",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.360",
doi = "10.18653/v1/2020.findings-emnlp.360",
pages = "4034--4048",
abstract = "We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. As a set of baselines for further studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset. We further propose a method for direct cross-lingual summarization (i.e., without requiring translation at inference time) by leveraging synthetic data and Neural Machine Translation as a pre-training step. Our method significantly outperforms the baseline approaches, while being more cost efficient during inference.",
}
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Faisal Ladhak, Esin Durmus
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
faisal@cs.columbia.edu, esdurmus@stanford.edu
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
The dataset does not have multiple dialects per language.
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`, `Spanish, Castilian`, `Portuguese`, `French`, `German`, `Russian`, `Italian`, `Indonesian`, `Dutch, Flemish`, `Arabic`, `Chinese`, `Vietnamese`, `Thai`, `Japanese`, `Korean`, `Hindi`, `Czech`, `Turkish`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
No information about the user demographic is available.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-nc-sa-3.0: Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0)
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The dataset was intended to serve as a large-scale, high-quality benchmark dataset for cross-lingual summarization.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Produce a high quality summary for the given input article.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Columbia University
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Faisal Ladhak (Columbia University), Esin Durmus (Stanford University), Claire Cardie (Cornell University), Kathleen McKeown (Columbia University)
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Jenny Chim (Queen Mary University of London), Faisal Ladhak (Columbia University)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `gem_id`: The id for the data instance.
- `source_language`: The language of the source article.
- `target_language`: The language of the target summary.
- `source`: The source document.
- `target`: The target summary.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
{
"gem_id": "wikilingua_crosslingual-train-12345",
"gem_parent_id": "wikilingua_crosslingual-train-12345",
"source_language": "fr",
"target_language": "de",
"source": "Document in fr",
"target": "Summary in de",
}
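Given instances shaped like the example above, a single translation direction (say, French into German) can be selected with a simple filter over the language fields; a minimal sketch using hypothetical sample records:

```python
def select_direction(instances, src, tgt):
    """Keep only instances translating from `src` into `tgt`."""
    return [ex for ex in instances
            if ex["source_language"] == src and ex["target_language"] == tgt]

instances = [
    {"gem_id": "wikilingua_crosslingual-train-12345",
     "source_language": "fr", "target_language": "de",
     "source": "Document in fr", "target": "Summary in de"},
    {"gem_id": "wikilingua_crosslingual-train-12346",
     "source_language": "en", "target_language": "fr",
     "source": "Document in en", "target": "Summary in fr"},
]
fr_de = select_direction(instances, "fr", "de")
print(len(fr_de))  # 1
```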
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The data is split into train/dev/test. In addition to the full test set, there's also a sampled version of the test set.
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The data was split so that the same document appears in the same split across languages, ensuring there is no leakage into the test set.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
This dataset provides a large-scale, high-quality resource for cross-lingual summarization in 18 languages, increasing the coverage of languages for the GEM summarization task.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
XSum covers English news articles, and MLSum covers news articles in German and Spanish.
In contrast, this dataset has how-to articles in 18 languages, substantially increasing the languages covered. Moreover, it also covers a different domain than the other two datasets.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
The ability to generate quality summaries across multiple languages.
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`other`
#### Modification Details
<!-- info: For each of these changes, described them in more details and provided the intended purpose of the modification -->
<!-- scope: microscope -->
The previous version had separate data loaders for each language. In this version, we've created a single monolingual data loader, which contains monolingual data in each of the 18 languages. In addition, we've also created a single cross-lingual data loader across all the language pairs in the dataset.
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Ability to summarize content across different languages.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
ROUGE is used to measure content selection by comparing word overlap with reference summaries. In addition, the authors of the dataset also used human evaluation to evaluate content selection and fluency of the systems.
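To illustrate the word-overlap idea behind ROUGE, here is a simplified unigram-F1 sketch (not the official ROUGE implementation, which adds stemming, n-gram variants, and bootstrap confidence intervals):

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Unigram-overlap F1 between a candidate and a reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat on the mat", "the cat lay on the mat")
print(round(score, 3))  # 0.833
```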
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The dataset was created to enable new approaches for cross-lingual and multilingual summarization, which are currently understudied, and to open up interesting new directions for summarization research, e.g., exploring multi-source cross-lingual architectures (models that can summarize from multiple source languages into a target language) and building models that can summarize articles from any language into any other language for a given set of languages.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Given an input article, produce a high quality summary of the article in the target language.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
WikiHow, which is an online resource of how-to guides (written and reviewed by human authors), is used as the data source.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The articles cover 19 broad categories including health, arts and entertainment, personal care and style, travel, education and communications, etc. The categories cover a broad set of genres and topics.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
(1) Text Content. All text posted by Users to the Service is sub-licensed by wikiHow to other Users under a Creative Commons license as provided herein. The Creative Commons license allows such text content be used freely for non-commercial purposes, so long as it is used and attributed to the original author as specified under the terms of the license. Allowing free republication of our articles helps wikiHow achieve its mission by providing instruction on solving the problems of everyday life to more people for free. In order to support this goal, wikiHow hereby grants each User of the Service a license to all text content that Users contribute to the Service under the terms and conditions of a Creative Commons CC BY-NC-SA 3.0 License. Please be sure to read the terms of the license carefully. You continue to own all right, title, and interest in and to your User Content, and you are free to distribute it as you wish, whether for commercial or non-commercial purposes.
#### Other Consented Downstream Use
<!-- info: What other downstream uses of the data did the original data creators and the data curators consent to? -->
<!-- scope: microscope -->
The data is made freely available under the Creative Commons license; therefore there are no restrictions on downstream uses, as long as they are for non-commercial purposes.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
Only the article text and summaries were collected. No user information was retained in the dataset.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
yes - other datasets featuring the same task
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
yes
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`non-commercial use only`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`non-commercial use only`
### Known Technical Limitations
|
DFKI-SLT/brat | 2023-05-10T15:38:03.000Z | [
"task_categories:token-classification",
"task_ids:parsing",
"annotations_creators:expert-generated",
"language_creators:found",
"region:us"
] | DFKI-SLT | null | null | null | 2 | 1,580 | ---
annotations_creators:
- expert-generated
language_creators:
- found
license: []
task_categories:
- token-classification
task_ids:
- parsing
---
# Information Card for Brat
## Table of Contents
- [Description](#description)
- [Summary](#summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Usage](#usage)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Description
- **Homepage:** https://brat.nlplab.org
- **Paper:** https://aclanthology.org/E12-2021/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Summary
Brat is an intuitive web-based tool for text annotation supported by Natural Language Processing (NLP) technology. It has been developed for rich structured annotation across a variety of NLP tasks and aims to support manual curation efforts and increase annotator productivity using NLP techniques. brat is designed in particular for structured annotation, where the notes are not free-form text but have a fixed form that can be automatically processed and interpreted by a computer.
## Dataset Structure
Datasets annotated in the brat format are processed with this loading script. Annotations created in brat are stored on disk in a standoff format: annotations are stored separately from the annotated document text, which is never modified by the tool. For each text document in the system, there is a corresponding annotation file. The two are associated by the file naming convention that their base name (file name without suffix) is the same: for example, the file DOC-1000.ann contains annotations for the file DOC-1000.txt. More information can be found [here](https://brat.nlplab.org/standoff.html).
### Data Instances
[Needs More Information]
### Data Fields
```
-context: html content of data file as string
-file_name: a string name of file
-spans: a sequence containing id, type, location and text of a span
-relations: a sequence containing id, type and arguments of a relation
-equivalence_relations:
-events:
-attributions:
-normalizations:
-notes:
```
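For reference, brat's standoff `.ann` files store each text-bound span as a tab-separated line of the form `T<id>\t<type> <start> <end>\t<text>`. A minimal parser for such span lines (a sketch of the format, not the loading script itself):

```python
def parse_span_lines(ann_text):
    """Parse brat standoff text-bound ("T") lines into span dicts."""
    spans = []
    for line in ann_text.splitlines():
        if not line.startswith("T"):
            continue  # skip relations (R), events (E), notes (#), ...
        span_id, type_and_offsets, text = line.split("\t")
        # note: discontinuous spans (offsets containing ';') are not handled here
        span_type, start, end = type_and_offsets.split(" ")[:3]
        spans.append({"id": span_id, "type": span_type,
                      "start": int(start), "end": int(end), "text": text})
    return spans

ann = "T1\tOrganization 0 4\tSony\nR1\tOrigin Arg1:T3 Arg2:T1"
spans = parse_span_lines(ann)
print(spans[0]["type"], spans[0]["start"], spans[0]["end"])  # Organization 0 4
```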
### Usage
The brat script can be used by calling the `load_dataset()` method and passing `kwargs` (arguments to the [BuilderConfig](https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/builder_classes#datasets.BuilderConfig)), which should include at least the `url` of a dataset prepared with brat. We provide an example using the [SciArg](https://aclanthology.org/W18-5206.pdf) dataset below:
```python
from datasets import load_dataset
kwargs = {
"description" :
"""This dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing
fine-grained argumentative components and relations. It is the first argument-annotated corpus of scientific
publications (in English), which allows for joint analyses of argumentation and other rhetorical dimensions of
scientific writing.""",
"citation" :
"""@inproceedings{lauscher2018b,
title = {An argument-annotated corpus of scientific publications},
booktitle = {Proceedings of the 5th Workshop on Mining Argumentation},
publisher = {Association for Computational Linguistics},
author = {Lauscher, Anne and Glava\v{s}, Goran and Ponzetto, Simone Paolo},
address = {Brussels, Belgium},
year = {2018},
pages = {40–46}
}""",
"homepage": "https://github.com/anlausch/ArguminSci",
"url": "http://data.dws.informatik.uni-mannheim.de/sci-arg/compiled_corpus.zip",
"file_name_blacklist": ['A28'],
}
dataset = load_dataset('dfki-nlp/brat', **kwargs)
```
## Additional Information
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{stenetorp-etal-2012-brat,
title = "brat: a Web-based Tool for {NLP}-Assisted Text Annotation",
author = "Stenetorp, Pontus and
Pyysalo, Sampo and
Topi{\'c}, Goran and
Ohta, Tomoko and
Ananiadou, Sophia and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the Demonstrations at the 13th Conference of the {E}uropean Chapter of the Association for Computational Linguistics",
month = apr,
year = "2012",
address = "Avignon, France",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/E12-2021",
pages = "102--107",
}
``` |
quora | 2023-04-05T13:37:24.000Z | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | null | null | null | 9 | 1,571 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: Quora Question Pairs
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
paperswithcode_id: null
dataset_info:
features:
- name: questions
sequence:
- name: id
dtype: int32
- name: text
dtype: string
- name: is_duplicate
dtype: bool
splits:
- name: train
num_bytes: 58155622
num_examples: 404290
download_size: 58176133
dataset_size: 58155622
---
# Dataset Card for "quora"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.kaggle.com/c/quora-question-pairs](https://www.kaggle.com/c/quora-question-pairs)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 58.17 MB
- **Size of the generated dataset:** 58.15 MB
- **Total amount of disk used:** 116.33 MB
### Dataset Summary
The Quora dataset is composed of question pairs, and the task is to determine if the questions are paraphrases of each other (have the same meaning).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 58.17 MB
- **Size of the generated dataset:** 58.15 MB
- **Total amount of disk used:** 116.33 MB
An example of 'train' looks as follows.
```
{
"is_duplicate": true,
"questions": {
"id": [1, 2],
"text": ["Is this a sample question?", "Is this an example question?"]
}
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `questions`: a dictionary feature containing:
- `id`: a `int32` feature.
- `text`: a `string` feature.
- `is_duplicate`: a `bool` feature.
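A minimal sketch (not part of the official card) showing how one record of this nested layout can be unpacked into a `(question1, question2, label)` triple:

```python
def to_pair(example):
    # `questions.text` always holds the two paired questions.
    q1, q2 = example["questions"]["text"]
    return q1, q2, int(example["is_duplicate"])

sample = {
    "is_duplicate": True,
    "questions": {
        "id": [1, 2],
        "text": ["Is this a sample question?", "Is this an example question?"],
    },
}
print(to_pair(sample))
# ('Is this a sample question?', 'Is this an example question?', 1)
```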
### Data Splits
| name |train |
|-------|-----:|
|default|404290|
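Since the dataset ships only a single `train` split, a held-out set has to be carved out manually. A tiny index-shuffling sketch (the ratio and seed are arbitrary choices, not part of the dataset):

```python
import random

def split_indices(n, test_size=0.1, seed=42):
    # Deterministically shuffle indices, then cut once.
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(n * (1 - test_size))
    return idx[:cut], idx[cut:]

train_idx, test_idx = split_indices(404290)
print(len(train_idx), len(test_idx))  # 363861 40429
```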
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Unknown license.
### Citation Information
Unknown.
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@ghomasHudson](https://github.com/ghomasHudson), [@lewtun](https://github.com/lewtun) for adding this dataset. |
coqa | 2023-04-05T10:02:34.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|race",
"source_datasets:extended|cnn_dailymail",
"source_datasets:extended|wikipedia",
"source_datasets:extended|other",
"language:en",
"license:other",
"conversational-qa",
"arxiv:1808.07042",
"arxiv:1704.04683",
"arxiv:1506.03340",
"region:us"
] | null | CoQA: A Conversational Question Answering Challenge | @article{reddy-etal-2019-coqa,
title = "{C}o{QA}: A Conversational Question Answering Challenge",
author = "Reddy, Siva and
Chen, Danqi and
Manning, Christopher D.",
journal = "Transactions of the Association for Computational Linguistics",
volume = "7",
year = "2019",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/Q19-1016",
doi = "10.1162/tacl_a_00266",
pages = "249--266",
} | null | 25 | 1,564 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: 'CoQA: Conversational Question Answering Challenge'
size_categories:
- 1K<n<10K
source_datasets:
- extended|race
- extended|cnn_dailymail
- extended|wikipedia
- extended|other
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: coqa
tags:
- conversational-qa
dataset_info:
features:
- name: source
dtype: string
- name: story
dtype: string
- name: questions
sequence: string
- name: answers
sequence:
- name: input_text
dtype: string
- name: answer_start
dtype: int32
- name: answer_end
dtype: int32
splits:
- name: train
num_bytes: 17981459
num_examples: 7199
- name: validation
num_bytes: 1225518
num_examples: 500
download_size: 58092681
dataset_size: 19206977
---
# Dataset Card for "coqa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://stanfordnlp.github.io/coqa/](https://stanfordnlp.github.io/coqa/)
- **Repository:** https://github.com/stanfordnlp/coqa-baselines
- **Paper:** [CoQA: A Conversational Question Answering Challenge](https://arxiv.org/abs/1808.07042)
- **Point of Contact:** [Google Group](https://groups.google.com/forum/#!forum/coqa), [Siva Reddy](mailto:siva.reddy@mila.quebec), [Danqi Chen](mailto:danqic@cs.princeton.edu)
- **Size of downloaded dataset files:** 58.09 MB
- **Size of the generated dataset:** 19.24 MB
- **Total amount of disk used:** 77.33 MB
### Dataset Summary
CoQA is a large-scale dataset for building Conversational Question Answering systems.
Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 58.09 MB
- **Size of the generated dataset:** 19.24 MB
- **Total amount of disk used:** 77.33 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": "{\"answer_end\": [179, 494, 511, 545, 879, 1127, 1128, 94, 150, 412, 1009, 1046, 643, -1, 764, 724, 125, 1384, 881, 910], \"answer_...",
"questions": "[\"When was the Vat formally opened?\", \"what is the library for?\", \"for what subjects?\", \"and?\", \"what was started in 2014?\", \"ho...",
"source": "wikipedia",
"story": "\"The Vatican Apostolic Library (), more commonly called the Vatican Library or simply the Vat, is the library of the Holy See, l..."
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `source`: a `string` feature.
- `story`: a `string` feature.
- `questions`: a `list` of `string` features.
- `answers`: a dictionary feature containing:
- `input_text`: a `string` feature.
- `answer_start`: a `int32` feature.
- `answer_end`: a `int32` feature.
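An illustrative helper for the span fields above. Two assumptions are made here that are inferred from the cropped example, not stated in the card: `answer_start`/`answer_end` are character offsets into `story`, and `-1` marks a turn with no extractable evidence span.

```python
def answer_span(story, start, end):
    if start == -1 or end == -1:
        return None  # no evidence span for this turn (assumption)
    return story[start:end]

story = "The Vatican Apostolic Library is the library of the Holy See."
print(answer_span(story, 0, 29))   # 'The Vatican Apostolic Library'
print(answer_span(story, -1, -1))  # None
```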
### Data Splits
| name |train|validation|
|-------|----:|---------:|
|default| 7199| 500|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
CoQA contains passages from seven domains. We make five of these public under the following licenses:
- Literature and Wikipedia passages are shared under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license.
- Children's stories are collected from [MCTest](https://www.microsoft.com/en-us/research/publication/mctest-challenge-dataset-open-domain-machine-comprehension-text/) which comes with [MSR-LA](https://github.com/mcobzarenco/mctest/blob/master/data/MCTest/LICENSE.pdf) license.
- Middle/High school exam passages are collected from [RACE](https://arxiv.org/abs/1704.04683) which comes with its [own](http://www.cs.cmu.edu/~glai1/data/race/) license.
- News passages are collected from the [DeepMind CNN dataset](https://arxiv.org/abs/1506.03340) which comes with [Apache](https://github.com/deepmind/rc-data/blob/master/LICENSE) license.
### Citation Information
```
@article{reddy-etal-2019-coqa,
title = "{C}o{QA}: A Conversational Question Answering Challenge",
author = "Reddy, Siva and
Chen, Danqi and
Manning, Christopher D.",
journal = "Transactions of the Association for Computational Linguistics",
volume = "7",
year = "2019",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/Q19-1016",
doi = "10.1162/tacl_a_00266",
pages = "249--266",
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@ojasaar](https://github.com/ojasaar), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
pccl-org/formal-logic-simple-order-simple-objects-blivergent-500 | 2023-09-21T20:20:02.000Z | [
"region:us"
] | pccl-org | null | null | null | 0 | 1,561 | ---
dataset_info:
features:
- name: greater_than
dtype: string
- name: less_than
dtype: string
- name: correct_example
sequence: string
- name: incorrect_example
sequence: string
- name: distance
dtype: int64
- name: index
dtype: int64
splits:
- name: train
num_bytes: 19635650
num_examples: 124750
download_size: 3888871
dataset_size: 19635650
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "formal-logic-simple-order-simple-objects-blivergent-500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alzoubi36/policy_ie_b | 2023-06-25T07:13:15.000Z | [
"region:us"
] | alzoubi36 | null | null | null | 0 | 1,555 | ---
dataset_info:
features:
- name: type-I
struct:
- name: subtask
dtype: string
- name: tags
sequence: string
- name: tokens
sequence: string
- name: type-II
struct:
- name: subtask
dtype: string
- name: tags
sequence: string
- name: tokens
sequence: string
splits:
- name: train
num_bytes: 3944744
num_examples: 4109
- name: validation
num_bytes: 1102169
num_examples: 1041
- name: test
num_bytes: 1102169
num_examples: 1041
download_size: 814098
dataset_size: 6149082
---
# Dataset for the PolicyIE-B task in the [PrivacyGLUE](https://github.com/infsys-lab/privacy-glue) dataset
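A made-up example of the token/tag sequence layout described by the schema above; the tag inventory shown here is illustrative, not the task's actual label set:

```python
example = {
    "subtask": "type-I",
    "tokens": ["We", "collect", "your", "email"],
    "tags": ["O", "B-action", "O", "B-data"],
}

# Tokens and tags are parallel sequences of equal length.
assert len(example["tokens"]) == len(example["tags"])
for token, tag in zip(example["tokens"], example["tags"]):
    print(f"{token}\t{tag}")
```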
|
Hello-SimpleAI/HC3 | 2023-01-21T13:10:10.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:zero-shot-classification",
"size_categories:10K<n<100K",
"language:en",
"language:zh",
"license:cc-by-sa-4.0",
"ChatGPT",
"SimpleAI",
"Detection",
"OOD",
"arxiv:2301.07597",
"region:us"
] | Hello-SimpleAI | Human ChatGPT Comparison Corpus (HC3) | \ | null | 115 | 1,552 | ---
task_categories:
- text-classification
- question-answering
- sentence-similarity
- zero-shot-classification
language:
- en
- zh
tags:
- ChatGPT
- SimpleAI
- Detection
- OOD
size_categories:
- 10K<n<100K
license: cc-by-sa-4.0
---
# Human ChatGPT Comparison Corpus (HC3)
We propose the first human-ChatGPT comparison corpus, named the **HC3** dataset.
This dataset is introduced in our paper:
- Paper: [***How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection***](https://arxiv.org/abs/2301.07597)
Code, models and analysis are available on our GitHub:
- GitHub: [**Chatgpt-Comparison-Detection project** 🔬](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection)
# Dataset Copyright
If a source dataset used in this corpus has a specific license that is stricter than CC-BY-SA, our products follow that same license. If not, they follow the CC-BY-SA license.
See [dataset copyright](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection#dataset-copyright).
# Citation
Check out the paper [arXiv:2301.07597](https://arxiv.org/abs/2301.07597)
```
@article{guo-etal-2023-hc3,
title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
author = "Guo, Biyang and
Zhang, Xin and
Wang, Ziyuan and
Jiang, Minqi and
Nie, Jinran and
Ding, Yuxuan and
Yue, Jianwei and
Wu, Yupeng",
  journal={arXiv preprint arXiv:2301.07597},
  year = "2023",
}
``` |
mteb/amazon_counterfactual | 2022-09-27T19:10:37.000Z | [
"language:de",
"language:en",
"language:ja",
"arxiv:2104.06893",
"region:us"
] | mteb | The dataset contains sentences from Amazon customer reviews (sampled from Amazon product review dataset) annotated for counterfactual detection (CFD) binary classification. Counterfactual statements describe events that did not or cannot take place. Counterfactual statements may be identified as statements of the form – If p was true, then q would be true (i.e. assertions whose antecedent (p) and consequent (q) are known or assumed to be false). | @misc{oneill2021i,
title={I Wish I Would Have Loved This One, But I Didn't -- A Multilingual Dataset for Counterfactual Detection in Product Reviews},
author={James O'Neill and Polina Rozenshtein and Ryuichi Kiryo and Motoko Kubota and Danushka Bollegala},
year={2021},
eprint={2104.06893},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 1 | 1,549 | ---
language:
- de
- en
- ja
---
# Amazon Multilingual Counterfactual Dataset
The dataset contains sentences from Amazon customer reviews (sampled from Amazon product review dataset) annotated for counterfactual detection (CFD) binary classification. Counterfactual statements describe events that did not or cannot take place. Counterfactual statements may be identified as statements of the form – If p was true, then q would be true (i.e. assertions whose antecedent (p) and consequent (q) are known or assumed to be false).
The key features of this dataset are:
* The dataset is multilingual and contains sentences in English, German, and Japanese.
* The labeling was done by professional linguists and high quality was ensured.
* The dataset is supplemented with the annotation guidelines and definitions, which were worked out by professional linguists. We also provide the clue word lists, which are typical for counterfactual sentences and were used for initial data filtering. The clue word lists were also compiled by professional linguists.
Please see the [paper](https://arxiv.org/abs/2104.06893) for the data statistics, detailed description of data collection and annotation.
GitHub repo URL: https://github.com/amazon-research/amazon-multilingual-counterfactual-dataset
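To illustrate the clue-word filtering mentioned above, here is a toy sketch; the phrases below are invented stand-ins, not the dataset's actual linguist-compiled lists:

```python
CLUE_PHRASES = {"wish", "would have", "if only", "should have"}

def has_clue(sentence: str) -> bool:
    # Case-insensitive substring check against the clue-phrase list.
    s = sentence.lower()
    return any(phrase in s for phrase in CLUE_PHRASES)

print(has_clue("I wish I would have loved this one."))  # True
print(has_clue("Great product, works as expected."))    # False
```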
## Usage
You can load each of the languages as follows:
```python
from datasets import get_dataset_config_names, load_dataset

dataset_id = "SetFit/amazon_counterfactual"
# Returns ['de', 'en', 'en-ext', 'ja']
configs = get_dataset_config_names(dataset_id)
# Load the English subset
dset = load_dataset(dataset_id, name="en")
``` |
mstz/adult | 2023-04-15T11:37:47.000Z | [
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc",
"adult",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @inproceedings{DBLP:conf/kdd/Kohavi96,
author = {Ron Kohavi},
editor = {Evangelos Simoudis and
Jiawei Han and
Usama M. Fayyad},
title = {Scaling Up the Accuracy of Naive-Bayes Classifiers: {A} Decision-Tree
Hybrid},
booktitle = {Proceedings of the Second International Conference on Knowledge Discovery
and Data Mining (KDD-96), Portland, Oregon, {USA}},
pages = {202--207},
publisher = {{AAAI} Press},
year = {1996},
url = {http://www.aaai.org/Library/KDD/1996/kdd96-033.php},
timestamp = {Mon, 05 Jun 2017 13:20:21 +0200},
biburl = {https://dblp.org/rec/conf/kdd/Kohavi96.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 0 | 1,549 | ---
language:
- en
tags:
- adult
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Adult
size_categories:
- 10K<n<100K
task_categories:
- tabular-classification
configs:
- encoding
- income
- income-no race
- race
license: cc
---
# Adult
The [Adult dataset](https://archive.ics.uci.edu/ml/datasets/Adult) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Census dataset containing personal characteristics of individuals, together with whether their income exceeds a threshold.
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|-----------------------------------------------------------------|
| encoding | | Encoding dictionary showing original values of encoded features.|
| income | Binary classification | Classify the person's income as over or under the threshold. |
| income-no race | Binary classification | As `income`, but the `race` feature is removed. |
| race | Multiclass classification | Predict the race of the individual. |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/adult", "income")["train"]
```
# Features
Target feature changes according to the selected configuration and is always in last position in the dataset.
|**Feature** |**Type** | **Description** |
|-------------------------------|-----------|------------------------------------------------------------|
|`age` |`[int64]` | Age of the person. |
|`capital_gain` |`[float64]`| Capital gained by the person. |
|`capital_loss` |`[float64]`| Capital lost by the person. |
|`education` |`[int8]` | Education level: the higher, the more educated the person. |
|`final_weight` |`[int64]` | |
|`hours_worked_per_week` |`[int64]` | Hours worked per week. |
|`marital_status` |`[string]` | Marital status of the person. |
|`native_country` |`[string]` | Native country of the person. |
|`occupation` |`[string]` | Job of the person. |
|`race` |`[string]` | Race of the person. |
|`relationship` |`[string]` | |
|`is_male` |`[bool]` | Man/Woman. |
|`workclass` |`[string]` | Type of job of the person. |
|**over_threshold** |`int8` | `1` for income `>= 50k$`, `0` otherwise. | |
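The binary `over_threshold` target described in the last row can be restated as a one-line rule:

```python
def over_threshold(income_usd: float) -> int:
    # 1 for income >= 50k$, 0 otherwise (as in the table above).
    return int(income_usd >= 50_000)

print(over_threshold(62_000), over_threshold(31_000))  # 1 0
```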
cyrilzhang/TinyStories2-ascii-bpe-2k | 2023-09-22T23:24:28.000Z | [
"region:us"
] | cyrilzhang | null | null | null | 0 | 1,536 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 2369808200
num_examples: 578002
- name: validation
num_bytes: 23866100
num_examples: 5821
download_size: 827963790
dataset_size: 2393674300
---
# Dataset Card for "TinyStories2-ascii-bpe-2k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jacob-hugging-face/job-descriptions | 2023-08-18T20:07:48.000Z | [
"license:llama2",
"region:us"
] | jacob-hugging-face | null | null | null | 4 | 1,533 | ---
license: llama2
---
|
mteb/tweet_sentiment_extraction | 2022-09-27T19:14:27.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 9 | 1,524 | ---
language:
- en
--- |
bilgeyucel/seven-wonders | 2023-03-09T14:25:43.000Z | [
"size_categories:n<1K",
"language:en",
"region:us"
] | bilgeyucel | null | null | null | 0 | 1,523 | ---
language:
- en
size_categories:
- n<1K
--- |
nielsr/breast-cancer | 2023-05-01T18:38:43.000Z | [
"region:us"
] | nielsr | null | null | null | 5 | 1,518 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 42431652.0
num_examples: 130
download_size: 0
dataset_size: 42431652.0
---
# Dataset Card for "breast-cancer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
graphs-datasets/MUTAG | 2023-02-07T16:39:19.000Z | [
"task_categories:graph-ml",
"license:unknown",
"region:us"
] | graphs-datasets | null | null | null | 3 | 1,516 | ---
license: unknown
task_categories:
- graph-ml
---
# Dataset Card for MUTAG
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://pubs.acs.org/doi/abs/10.1021/jm00106a046)**
- **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/MUTAG.zip)**
- **Paper:** Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-mutag)
### Dataset Summary
The `MUTAG` dataset is 'a collection of nitroaromatic compounds and the goal is to predict their mutagenicity on Salmonella typhimurium'.
### Supported Tasks and Leaderboards
`MUTAG` should be used for molecular property prediction (aiming to predict whether molecules have a mutagenic effect on a given bacterium or not), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/MUTAG")
# For the train set (replace by valid or test as needed);
# each row is converted field by field into a PyG `Data` object
dataset_pg_list = [
    Data(x=torch.tensor(g["node_feat"]),
         edge_index=torch.tensor(g["edge_index"], dtype=torch.long),
         edge_attr=torch.tensor(g["edge_attr"]),
         y=torch.tensor(g["y"]))
    for g in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | small |
| #graphs | 187 |
| average #nodes | 18.03 |
| average #edges | 39.80 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)
- `num_nodes` (int): number of nodes of the graph
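A quick, dependency-free sanity check of one row against the schema above (the toy graph is invented, not taken from MUTAG):

```python
def check_graph_row(row):
    # Consistency checks implied by the field descriptions above.
    assert len(row["node_feat"]) == row["num_nodes"]
    assert len(row["edge_index"]) == 2
    src, dst = row["edge_index"]
    assert len(src) == len(dst) == len(row["edge_attr"])
    assert all(0 <= v < row["num_nodes"] for v in src + dst)
    return True

toy = {
    "node_feat": [[1, 0], [0, 1], [0, 1]],
    "edge_index": [[0, 1, 2], [1, 2, 0]],
    "edge_attr": [[1], [1], [1]],
    "y": [1],
    "num_nodes": 3,
}
print(check_graph_row(toy))  # True
```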
### Data Splits
This data comes from the PyTorch Geometric (`TUDataset`) version of the dataset and follows the provided data splits.
This information can be found back using
```python
from torch_geometric.datasets import TUDataset
cur_dataset = TUDataset(root="../dataset/loaded/",
name="MUTAG")
```
## Additional Information
### Licensing Information
The dataset has been released under unknown license, please open an issue if you have information.
### Citation Information
```
@article{doi:10.1021/jm00106a046,
author = {Debnath, Asim Kumar and Lopez de Compadre, Rosa L. and Debnath, Gargi and Shusterman, Alan J. and Hansch, Corwin},
title = {Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity},
journal = {Journal of Medicinal Chemistry},
volume = {34},
number = {2},
pages = {786-797},
year = {1991},
doi = {10.1021/jm00106a046},
URL = {
https://doi.org/10.1021/jm00106a046
},
eprint = {
https://doi.org/10.1021/jm00106a046
}
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. |
yitingxie/rlhf-reward-datasets | 2023-01-01T12:23:04.000Z | [
"region:us"
] | yitingxie | null | null | null | 44 | 1,502 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: test
num_bytes: 6093563
num_examples: 5103
- name: train
num_bytes: 90528217
num_examples: 76256
download_size: 57138483
dataset_size: 96621780
---
# Dataset Card for "rlhf-reward-datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
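The `prompt`/`chosen`/`rejected` schema above is the standard pairwise-preference layout for reward modeling. A toy sketch with invented text (a reward model is typically trained to score the chosen continuation above the rejected one):

```python
record = {
    "prompt": "Human: Name a primary color. Assistant:",
    "chosen": " Red.",
    "rejected": " Gravel.",
}

def to_pairwise(rec):
    # Concatenate the shared prompt with each continuation.
    return rec["prompt"] + rec["chosen"], rec["prompt"] + rec["rejected"]

better, worse = to_pairwise(record)
print(better)  # Human: Name a primary color. Assistant: Red.
```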
emozilla/pg_books-tokenized-bos-eos-chunked-65536 | 2023-10-07T02:19:15.000Z | [
"region:us"
] | emozilla | null | null | null | 3 | 1,499 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 67744337720
num_examples: 79514
download_size: 1125510240
dataset_size: 67744337720
---
# Dataset Card for "pg_books-tokenized-bos-eos-chunked-65536"
The [pg19](https://huggingface.co/datasets/emozilla/pg19) dataset, tokenized with the LLaMA tokenizer into 64k-token chunks, each bookended with BOS and EOS tokens
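A rough sketch of the chunking scheme the description implies. The token ids, the exact BOS/EOS placement, and the handling of book boundaries are all assumptions; the real special-token ids come from the LLaMA tokenizer and the real chunk length is 65536.

```python
BOS, EOS = 1, 2   # LLaMA's conventional special-token ids (assumption)
CHUNK_LEN = 8     # kept tiny for illustration; the real dataset uses 65536

def chunk_tokens(token_ids, chunk_len=CHUNK_LEN):
    body = chunk_len - 2  # reserve two slots for BOS and EOS
    for i in range(0, len(token_ids), body):
        yield [BOS] + token_ids[i:i + body] + [EOS]

print(list(chunk_tokens(list(range(10)))))
# [[1, 0, 1, 2, 3, 4, 5, 2], [1, 6, 7, 8, 9, 2]]
```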
argilla/banking_sentiment_setfit | 2022-12-07T09:08:25.000Z | [
"region:us"
] | argilla | null | null | null | 1 | 1,495 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
splits:
- name: train
num_bytes: 7433.25
num_examples: 108
- name: test
num_bytes: 2477.75
num_examples: 36
download_size: 8087
dataset_size: 9911.0
---
# Dataset Card for "banking_sentiment_setfit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
llm-book/JGLUE | 2023-10-06T00:58:24.000Z | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:text-classification",
"task_ids:multiple-choice-qa",
"task_ids:open-domain-qa",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:ja",
"license:cc-by-4.0",
"MARC",
"STS",
"NLI",
"SQuAD",
"CommonsenseQA",
"region:us"
] | llm-book | JGLUE, Japanese General Language Understanding Evaluation, is built to measure the general NLU ability in Japanese. JGLUE has been constructed from scratch without translation. We hope that JGLUE will facilitate NLU research in Japanese. | @inproceedings{kurihara-etal-2022-jglue,
title = "{JGLUE}: {J}apanese General Language Understanding Evaluation",
author = "Kurihara, Kentaro and
Kawahara, Daisuke and
Shibata, Tomohide",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.317",
pages = "2957--2966",
abstract = "To develop high-performance natural language understanding (NLU) models, it is necessary to have a benchmark to evaluate and analyze NLU ability from various perspectives. While the English NLU benchmark, GLUE, has been the forerunner, benchmarks are now being released for languages other than English, such as CLUE for Chinese and FLUE for French; but there is no such benchmark for Japanese. We build a Japanese NLU benchmark, JGLUE, from scratch without translation to measure the general NLU ability in Japanese. We hope that JGLUE will facilitate NLU research in Japanese.",
}
@InProceedings{Kurihara_nlp2022,
author = "栗原健太郎 and 河原大輔 and 柴田知秀",
title = "JGLUE: 日本語言語理解ベンチマーク",
booktitle = "言語処理学会第28回年次大会",
year = "2022",
url = "https://www.anlp.jp/proceedings/annual_meeting/2022/pdf_dir/E8-4.pdf"
note= "in Japanese"
} | null | 3 | 1,495 | ---
annotations_creators:
- crowdsourced
language:
- ja
language_creators:
- crowdsourced
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: JGLUE
size_categories: []
source_datasets:
- original
tags:
- MARC
- STS
- NLI
- SQuAD
- CommonsenseQA
task_categories:
- multiple-choice
- question-answering
- sentence-similarity
- text-classification
task_ids:
- multiple-choice-qa
- open-domain-qa
- multi-class-classification
- sentiment-classification
---
# Dataset Card for JGLUE
[](https://aclanthology.org/2022.lrec-1.317)
This is the JGLUE dataset used in the book 『大規模言語モデル入門』 (Introduction to Large Language Models).
It uses the dataset published in the [original repository](https://github.com/yahoojapan/JGLUE).
### Licence
The code is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
For the data itself, please follow the license of the [original distributor](https://github.com/yahoojapan/JGLUE).
### Citation
```bibtex
@inproceedings{kurihara-etal-2022-jglue,
title = "{JGLUE}: {J}apanese General Language Understanding Evaluation",
author = "Kurihara, Kentaro and
Kawahara, Daisuke and
Shibata, Tomohide",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.317",
pages = "2957--2966",
abstract = "To develop high-performance natural language understanding (NLU) models, it is necessary to have a benchmark to evaluate and analyze NLU ability from various perspectives. While the English NLU benchmark, GLUE, has been the forerunner, benchmarks are now being released for languages other than English, such as CLUE for Chinese and FLUE for French; but there is no such benchmark for Japanese. We build a Japanese NLU benchmark, JGLUE, from scratch without translation to measure the general NLU ability in Japanese. We hope that JGLUE will facilitate NLU research in Japanese.",
}
```
```bibtex
@InProceedings{Kurihara_nlp2022,
author = "栗原健太郎 and 河原大輔 and 柴田知秀",
title = "JGLUE: 日本語言語理解ベンチマーク",
booktitle = "言語処理学会第 28 回年次大会",
year = "2022",
url = "https://www.anlp.jp/proceedings/annual_meeting/2022/pdf_dir/E8-4.pdf"
note= "in Japanese"
}
```
### Contributions
We thank the dataset creators, [Kentaro Kurihara](https://twitter.com/kkurihara_cs), [Daisuke Kawahara](https://twitter.com/daisukekawahar1), and [Tomohide Shibata](https://twitter.com/stomohide).
The code in this repository is based on [this repository](https://huggingface.co/datasets/shunk031/JGLUE) by [Shunsuke Kitada](https://twitter.com/shunk031).
jglaser/binding_affinity | 2022-03-12T00:29:11.000Z | [
"molecules",
"chemistry",
"SMILES",
"region:us"
] | jglaser | A dataset to fine-tune language models on protein-ligand binding affinity prediction. | @InProceedings{huggingface:dataset,
title = {jglaser/binding_affinity},
author={Jens Glaser, ORNL
},
year={2021}
} | null | 4 | 1,492 | ---
tags:
- molecules
- chemistry
- SMILES
---
## How to use the data sets
This dataset contains 1.9M unique pairs of protein sequences and ligand SMILES with experimentally determined
binding affinities. It can be used for fine-tuning a language model.
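As a minimal sketch of how one such pair could be serialized for fine-tuning (the field names `seq`, `smiles_can`, and `affinity`, the separator token, and all values below are illustrative assumptions, not guaranteed column names of this dataset):

```python
# Illustrative toy record; the field names (seq, smiles_can, affinity),
# the separator token, and the values are assumptions for this sketch.
pair = {
    "seq": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",    # toy protein sequence
    "smiles_can": "CC(=O)Oc1ccccc1C(=O)O",         # aspirin, canonical SMILES
    "affinity": 6.2,                               # toy affinity label
}

# Serialize the protein/ligand pair into a single text input for a language model.
text = pair["seq"] + " [SEP] " + pair["smiles_can"]
label = pair["affinity"]
print(text, label)
```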
The data comes from the following sources:
- BindingDB
- PDBbind-cn
- BioLIP
- BindingMOAD
### Use the already preprocessed data
Load a test/train split using
```
from datasets import load_dataset
train = load_dataset("jglaser/binding_affinity",split='train[:90%]')
validation = load_dataset("jglaser/binding_affinity",split='train[90%:]')
```
Optionally, datasets with certain protein sequences removed are available.
These can be used to test the predictive power for specific proteins even when
those proteins are not part of the training data.
- `train_no_kras` (no KRAS proteins)
**Loading the data manually**
The file `data/all.parquet` contains the preprocessed data. To extract it,
you need to download and install [Git LFS support](https://git-lfs.github.com/).
### Pre-process yourself
To manually perform the preprocessing, download the data sets from
1. BindingDB
In `bindingdb`, download the database as tab-separated values
<https://bindingdb.org> > Download > BindingDB_All_2021m4.tsv.zip
and extract the zip archive into `bindingdb/data`
Run the steps in `bindingdb.ipynb`
2. PDBBind-cn
Register for an account at <https://www.pdbbind.org.cn/>, confirm the validation
email, then log in and download
- the Index files (1)
- the general protein-ligand complexes (2)
- the refined protein-ligand complexes (3)
Extract those files in `pdbbind/data`
Run the script `pdbbind.py` in a compute job on an MPI-enabled cluster
(e.g., `mpirun -n 64 pdbbind.py`).
Perform the steps in the notebook `pdbbind.ipynb`
3. BindingMOAD
Go to <https://bindingmoad.org> and download the files `every.csv`
(All of Binding MOAD, Binding Data) and the non-redundant biounits
(`nr_bind.zip`). Place and extract those files into `binding_moad`.
Run the script `moad.py` in a compute job on an MPI-enabled cluster
(e.g., `mpirun -n 64 moad.py`).
Perform the steps in the notebook `moad.ipynb`
4. BioLIP
Download from <https://zhanglab.ccmb.med.umich.edu/BioLiP/> the files
- receptor1.tar.bz2 (Receptor1, Non-redundant set)
- ligand_2013-03-6.tar.bz2 (Ligands)
- BioLiP.tar.bz2 (Annotations)
and extract them in `biolip/data`.
The following steps are **optional**; they **do not** result in additional binding affinity data.
Download the script
- download_all_sets.pl
from the Weekly update subpage.
Update the 2013 database to its current state
`perl download_all_sets.pl`
Run the script `biolip.py` in a compute job on an MPI-enabled cluster
(e.g., `mpirun -n 64 biolip.py`).
Perform the steps in the notebook `biolip.ipynb`
5. Final concatenation and filtering
Run the steps in the notebook `combine_dbs.ipynb`
|
allenai/scitldr | 2023-01-25T14:43:42.000Z | [
"task_categories:summarization",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"scientific-documents-summarization",
"arxiv:2004.15011",
"region:us"
] | allenai | A new multi-target dataset of 5.4K TLDRs over 3.2K papers.
SCITLDR contains both author-written and expert-derived TLDRs,
where the latter are collected using a novel annotation protocol
that produces high-quality summaries while minimizing annotation burden. | @article{cachola2020tldr,
title={{TLDR}: Extreme Summarization of Scientific Documents},
author={Isabel Cachola and Kyle Lo and Arman Cohan and Daniel S. Weld},
journal={arXiv:2004.15011},
year={2020},
} | null | 14 | 1,484 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: scitldr
pretty_name: SciTLDR
tags:
- scientific-documents-summarization
dataset_info:
- config_name: Abstract
features:
- name: source
sequence: string
- name: source_labels
sequence:
class_label:
names:
'0': non-oracle
'1': oracle
- name: rouge_scores
sequence: float32
- name: paper_id
dtype: string
- name: target
sequence: string
splits:
- name: train
num_bytes: 2738065
num_examples: 1992
- name: test
num_bytes: 1073656
num_examples: 618
- name: validation
num_bytes: 994876
num_examples: 619
download_size: 5483987
dataset_size: 4806597
- config_name: AIC
features:
- name: source
sequence: string
- name: source_labels
sequence:
class_label:
names:
'0': 0
'1': 1
- name: rouge_scores
sequence: float32
- name: paper_id
dtype: string
- name: ic
dtype: bool_
- name: target
sequence: string
splits:
- name: train
num_bytes: 14473822
num_examples: 1992
- name: test
num_bytes: 4822026
num_examples: 618
- name: validation
num_bytes: 4476237
num_examples: 619
download_size: 25545108
dataset_size: 23772085
- config_name: FullText
features:
- name: source
sequence: string
- name: source_labels
sequence:
class_label:
names:
'0': non-oracle
'1': oracle
- name: rouge_scores
sequence: float32
- name: paper_id
dtype: string
- name: target
sequence: string
splits:
- name: train
num_bytes: 66917363
num_examples: 1992
- name: test
num_bytes: 20182554
num_examples: 618
- name: validation
num_bytes: 18790651
num_examples: 619
download_size: 110904552
dataset_size: 105890568
---
# Dataset Card for SciTLDR
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/allenai/scitldr
- **Repository:** https://github.com/allenai/scitldr
- **Paper:** https://arxiv.org/abs/2004.15011
- **Leaderboard:**
- **Point of Contact:** {isabelc,kylel,armanc,danw}@allenai.org
### Dataset Summary
`SciTLDR`: Extreme Summarization of Scientific Documents
SciTLDR is a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden.
### Supported Tasks and Leaderboards
summarization
### Languages
English
## Dataset Structure
SciTLDR is split into a 60/20/20 train/dev/test split. In each file, each line is a JSON object, formatted as follows:
```
{
"source":[
"sent0",
"sent1",
"sent2",
...
],
"source_labels":[binary list in which 1 is the oracle sentence],
"rouge_scores":[precomputed rouge-1 scores],
"paper_id":"PAPER-ID",
"target":[
"author-tldr",
"pr-tldr0",
"pr-tldr1",
...
],
"title":"TITLE"
}
```
The keys `rouge_scores` and `source_labels` are not necessary for any code to run; precomputed ROUGE scores are provided for future research.
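To make the per-line format concrete, the following snippet parses a made-up, abridged record (not real SciTLDR data) and recovers the oracle sentence flagged in `source_labels`:

```python
import json

# A made-up, abridged record in the per-line JSON format described above.
line = ('{"source": ["We propose X.", "Our method improves Y."], '
        '"source_labels": [0, 1], "rouge_scores": [0.1, 0.3], '
        '"paper_id": "demo-0", "target": ["A TLDR about Y."], "title": "Demo"}')

record = json.loads(line)

# source_labels marks the oracle sentence with a 1.
oracle_idx = record["source_labels"].index(1)
oracle_sentence = record["source"][oracle_idx]
print(oracle_sentence)  # -> Our method improves Y.
```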
### Data Instances
{
"source": [
"Mixed precision training (MPT) is becoming a practical technique to improve the speed and energy efficiency of training deep neural networks by leveraging the fast hardware support for IEEE half-precision floating point that is available in existing GPUs.",
"MPT is typically used in combination with a technique called loss scaling, that works by scaling up the loss value up before the start of backpropagation in order to minimize the impact of numerical underflow on training.",
"Unfortunately, existing methods make this loss scale value a hyperparameter that needs to be tuned per-model, and a single scale cannot be adapted to different layers at different training stages.",
"We introduce a loss scaling-based training method called adaptive loss scaling that makes MPT easier and more practical to use, by removing the need to tune a model-specific loss scale hyperparameter.",
"We achieve this by introducing layer-wise loss scale values which are automatically computed during training to deal with underflow more effectively than existing methods.",
"We present experimental results on a variety of networks and tasks that show our approach can shorten the time to convergence and improve accuracy, compared with using the existing state-of-the-art MPT and single-precision floating point."
],
"source_labels": [
0,
0,
0,
1,
0,
0
],
"rouge_scores": [
0.2399999958000001,
0.26086956082230633,
0.19999999531250012,
0.38095237636054424,
0.2051282003944774,
0.2978723360796741
],
"paper_id": "rJlnfaNYvB",
"target": [
"We devise adaptive loss scaling to improve mixed precision training that surpass the state-of-the-art results.",
"Proposal for an adaptive loss scaling method during backpropagation for mix precision training where scale rate is decided automatically to reduce the underflow.",
"The authors propose a method to train models in FP16 precision that adopts a more elaborate way to minimize underflow in every layer simultaneously and automatically."
],
"title": "Adaptive Loss Scaling for Mixed Precision Training"
}
### Data Fields
- `source`: The Abstract, Introduction and Conclusion (AIC) or Full text of the paper, with one sentence per line.
- `source_labels`: Binary 0 or 1, 1 denotes the oracle sentence.
- `rouge_scores`: Precomputed ROUGE baseline scores for each sentence.
- `paper_id`: Arxiv Paper ID.
- `target`: Multiple target summaries for each paper, one summary per line.
- `title`: Title of the paper.
### Data Splits
| | train | valid | test |
|-------------------|-------|--------|------|
| SciTLDR-A | 1992 | 618 | 619 |
| SciTLDR-AIC | 1992 | 618 | 619 |
| SciTLDR-FullText | 1992 | 618 | 619 |
## Dataset Creation
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
https://allenai.org/
### Annotations
#### Annotation process
Given the title and first 128 words of a reviewer comment about a paper,
re-write the summary (if it exists) into a single sentence or an incomplete
phrase. Summaries must be no more than one sentence.
Most summaries are between 15 and 25 words. The average rewritten summary is
20 words long.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
To encourage further research in the area of extreme summarization of scientific documents.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Apache License 2.0
### Citation Information
@article{cachola2020tldr,
title={{TLDR}: Extreme Summarization of Scientific Documents},
author={Isabel Cachola and Kyle Lo and Arman Cohan and Daniel S. Weld},
journal={arXiv:2004.15011},
year={2020},
}
### Contributions
Thanks to [@Bharat123rox](https://github.com/Bharat123rox) for adding this dataset. |
THUDM/humaneval-x | 2022-10-25T06:08:38.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"license:apache-2.0",
"region:us"
] | THUDM | HumanEval-X is a benchmark for the evaluation of the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks. | null | null | 43 | 1,482 | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- apache-2.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: HumanEval-X
---
# HumanEval-X
## Dataset Description
[HumanEval-X](https://github.com/THUDM/CodeGeeX) is a benchmark for evaluating the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks, such as code generation and translation.
## Languages
The dataset contains coding problems in 5 programming languages: Python, C++, Java, JavaScript, and Go.
## Dataset Structure
To load the dataset, you need to specify a subset among the 5 existing languages `[python, cpp, go, java, js]`. By default, `python` is loaded.
```python
from datasets import load_dataset
data = load_dataset("THUDM/humaneval-x", "js")
DatasetDict({
test: Dataset({
features: ['task_id', 'prompt', 'declaration', 'canonical_solution', 'test', 'example_test'],
num_rows: 164
})
})
```
```python
next(iter(data["test"]))
{'task_id': 'JavaScript/0',
'prompt': '/* Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> hasCloseElements([1.0, 2.0, 3.0], 0.5)\n false\n >>> hasCloseElements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n true\n */\nconst hasCloseElements = (numbers, threshold) => {\n',
'declaration': '\nconst hasCloseElements = (numbers, threshold) => {\n',
'canonical_solution': ' for (let i = 0; i < numbers.length; i++) {\n for (let j = 0; j < numbers.length; j++) {\n if (i != j) {\n let distance = Math.abs(numbers[i] - numbers[j]);\n if (distance < threshold) {\n return true;\n }\n }\n }\n }\n return false;\n}\n\n',
'test': 'const testHasCloseElements = () => {\n console.assert(hasCloseElements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) === true)\n console.assert(\n hasCloseElements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) === false\n )\n console.assert(hasCloseElements([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) === true)\n console.assert(hasCloseElements([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) === false)\n console.assert(hasCloseElements([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) === true)\n console.assert(hasCloseElements([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) === true)\n console.assert(hasCloseElements([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) === false)\n}\n\ntestHasCloseElements()\n',
'example_test': 'const testHasCloseElements = () => {\n console.assert(hasCloseElements([1.0, 2.0, 3.0], 0.5) === false)\n console.assert(\n hasCloseElements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) === true\n )\n}\ntestHasCloseElements()\n'}
```
## Data Fields
* ``task_id``: indicates the target language and ID of the problem. Language is one of ["Python", "Java", "JavaScript", "CPP", "Go"].
* ``prompt``: the function declaration and docstring, used for code generation.
* ``declaration``: only the function declaration, used for code translation.
* ``canonical_solution``: human-crafted example solutions.
* ``test``: hidden test samples, used for evaluation.
* ``example_test``: public test samples (appeared in prompt), used for evaluation.
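As a rough sketch (not the official CodeGeeX evaluation harness) of how these fields compose: for code generation, a model completion is appended to ``prompt`` and the resulting program is checked against ``test``. In the abridged toy sample below, the canonical solution stands in for a model completion:

```python
# Abridged toy sample mirroring the fields above (not a real benchmark entry).
sample = {
    "task_id": "JavaScript/0",
    "prompt": "const hasCloseElements = (numbers, threshold) => {\n",
    "canonical_solution": "  return false;\n}\n",
    "test": "testHasCloseElements()\n",
}

# Use the canonical solution as a stand-in for a model completion, then
# append the hidden tests to form the program that would be executed.
program = sample["prompt"] + sample["canonical_solution"] + sample["test"]
print(program)
```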
## Data Splits
Each subset has one split: test.
## Citation Information
Refer to https://github.com/THUDM/CodeGeeX. |
Anthropic/llm_global_opinions | 2023-06-29T00:46:48.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-sa-4.0",
"arxiv:2306.16388",
"region:us"
] | Anthropic | null | null | null | 22 | 1,481 | ---
license: cc-by-nc-sa-4.0
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for GlobalOpinionQA
## Dataset Summary
The data contains a subset of survey questions about global issues and opinions adapted from the [World Values Survey](https://www.worldvaluessurvey.org/) and [Pew Global Attitudes Survey](https://www.pewresearch.org/).
The data is further described in the paper: [Towards Measuring the Representation of Subjective Global Opinions in Language Models](https://arxiv.org/abs/2306.16388).
## Purpose
In our paper, we use this dataset to analyze the opinions that large language models (LLMs) reflect on complex global issues.
Our goal is to gain insights into potential biases in AI systems by evaluating their performance on subjective topics.
## Data Format
The data is in a CSV file with the following columns:
- question: The text of the survey question.
- selections: A dictionary where the key is the country name and the value is a list of percentages of respondents who selected each answer option for that country.
- options: A list of the answer options for the given question.
- source: GAS/WVS depending on whether the question is coming from Global Attitudes Survey or World Value Survey.
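For illustration, here is a sketch of consuming this format (the record below is invented, not real survey data): map each country to the answer option chosen by the plurality of respondents.

```python
# Invented toy record mirroring the columns described above.
record = {
    "question": "How serious a problem is X?",
    "selections": {"Germany": [0.6, 0.3, 0.1], "Japan": [0.2, 0.5, 0.3]},
    "options": ["Very serious", "Somewhat serious", "Not serious"],
}

# For each country, pick the answer option with the largest share of respondents.
plurality = {
    country: record["options"][shares.index(max(shares))]
    for country, shares in record["selections"].items()
}
print(plurality)  # -> {'Germany': 'Very serious', 'Japan': 'Somewhat serious'}
```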
## Usage
```python
from datasets import load_dataset
# Loading the data
dataset = load_dataset("Anthropic/llm_global_opinions")
```
## Disclaimer
We recognize the limitations of using this dataset to evaluate LLMs, as the underlying surveys were not specifically
designed for this purpose. Therefore, we acknowledge that the construct validity of these datasets when applied to LLMs may be limited.
## Contact
For questions, you can email esin at anthropic dot com
## Citation
If you would like to cite our work or data, you may use the following bibtex citation:
```
@misc{durmus2023measuring,
title={Towards Measuring the Representation of Subjective Global Opinions in Language Models},
author={Esin Durmus and Karina Nguyen and Thomas I. Liao and Nicholas Schiefer and Amanda Askell and Anton Bakhtin and Carol Chen and Zac Hatfield-Dodds and Danny Hernandez and Nicholas Joseph and Liane Lovitt and Sam McCandlish and Orowa Sikder and Alex Tamkin and Janel Thamkul and Jared Kaplan and Jack Clark and Deep Ganguli},
year={2023},
eprint={2306.16388},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
open-llm-leaderboard/details_golaxy__gogpt-7b-bloom | 2023-09-17T07:35:31.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 1,467 | ---
pretty_name: Evaluation run of golaxy/gogpt-7b-bloom
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [golaxy/gogpt-7b-bloom](https://huggingface.co/golaxy/gogpt-7b-bloom) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_golaxy__gogpt-7b-bloom\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-17T07:35:20.075381](https://huggingface.co/datasets/open-llm-leaderboard/details_golaxy__gogpt-7b-bloom/blob/main/results_2023-09-17T07-35-20.075381.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.2214765100671141,\n\
\ \"em_stderr\": 0.004252451287967787,\n \"f1\": 0.25772336409395996,\n\
\ \"f1_stderr\": 0.00428261897007673,\n \"acc\": 0.31452249408050514,\n\
\ \"acc_stderr\": 0.006788199951115784\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.2214765100671141,\n \"em_stderr\": 0.004252451287967787,\n\
\ \"f1\": 0.25772336409395996,\n \"f1_stderr\": 0.00428261897007673\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6290449881610103,\n\
\ \"acc_stderr\": 0.013576399902231568\n }\n}\n```"
repo_url: https://huggingface.co/golaxy/gogpt-7b-bloom
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|arc:challenge|25_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_17T07_35_20.075381
path:
- '**/details_harness|drop|3_2023-09-17T07-35-20.075381.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-17T07-35-20.075381.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_17T07_35_20.075381
path:
- '**/details_harness|gsm8k|5_2023-09-17T07-35-20.075381.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-17T07-35-20.075381.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hellaswag|10_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-31T10:56:27.356745.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-31T10:56:27.356745.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-31T10:56:27.356745.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_17T07_35_20.075381
path:
- '**/details_harness|winogrande|5_2023-09-17T07-35-20.075381.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-17T07-35-20.075381.parquet'
- config_name: results
data_files:
- split: 2023_07_31T10_56_27.356745
path:
- results_2023-07-31T10:56:27.356745.parquet
- split: 2023_09_17T07_35_20.075381
path:
- results_2023-09-17T07-35-20.075381.parquet
- split: latest
path:
- results_2023-09-17T07-35-20.075381.parquet
---
# Dataset Card for Evaluation run of golaxy/gogpt-7b-bloom
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/golaxy/gogpt-7b-bloom
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [golaxy/gogpt-7b-bloom](https://huggingface.co/golaxy/gogpt-7b-bloom) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_golaxy__gogpt-7b-bloom",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-09-17T07:35:20.075381](https://huggingface.co/datasets/open-llm-leaderboard/details_golaxy__gogpt-7b-bloom/blob/main/results_2023-09-17T07-35-20.075381.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each in the "results" configuration and the "latest" split of each eval):
```python
{
"all": {
"em": 0.2214765100671141,
"em_stderr": 0.004252451287967787,
"f1": 0.25772336409395996,
"f1_stderr": 0.00428261897007673,
"acc": 0.31452249408050514,
"acc_stderr": 0.006788199951115784
},
"harness|drop|3": {
"em": 0.2214765100671141,
"em_stderr": 0.004252451287967787,
"f1": 0.25772336409395996,
"f1_stderr": 0.00428261897007673
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.6290449881610103,
"acc_stderr": 0.013576399902231568
}
}
```
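The per-task metrics in a results file like the one above are plain nested dictionaries, so they can be post-processed without any extra tooling. A minimal sketch (the `accuracies` helper is hypothetical, not part of the leaderboard tooling; the task names and values are copied from the JSON above):

```python
# Per-task metrics copied from the results JSON above.
results = {
    "harness|drop|3": {"em": 0.2214765100671141, "f1": 0.25772336409395996},
    "harness|gsm8k|5": {"acc": 0.0},
    "harness|winogrande|5": {"acc": 0.6290449881610103},
}

def accuracies(results):
    """Return {task: acc} for every task that reports an 'acc' metric."""
    return {task: m["acc"] for task, m in results.items() if "acc" in m}

print(accuracies(results))
# {'harness|gsm8k|5': 0.0, 'harness|winogrande|5': 0.6290449881610103}
```

The same pattern works for `em`/`f1` tasks by filtering on a different metric key.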
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
quarel | 2023-04-05T13:37:19.000Z | [
"language:en",
"region:us"
] | null | QuaRel is a crowdsourced dataset of 2771 multiple-choice story questions, including their logical forms. | @inproceedings{quarel_v1,
title={QuaRel: A Dataset and Models for Answering Questions about Qualitative Relationships},
author={Oyvind Tafjord, Peter Clark, Matt Gardner, Wen-tau Yih, Ashish Sabharwal},
year={2018},
journal={arXiv:1805.05377v1}
} | null | 2 | 1,462 | ---
language:
- en
paperswithcode_id: quarel
pretty_name: QuaRel
dataset_info:
features:
- name: id
dtype: string
- name: answer_index
dtype: int32
- name: logical_forms
sequence: string
- name: logical_form_pretty
dtype: string
- name: world_literals
sequence:
- name: world1
dtype: string
- name: world2
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 1072874
num_examples: 1941
- name: test
num_bytes: 307588
num_examples: 552
- name: validation
num_bytes: 154308
num_examples: 278
download_size: 631370
dataset_size: 1534770
---
# Dataset Card for "quarel"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/quarel](https://allenai.org/data/quarel)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.63 MB
- **Size of the generated dataset:** 1.53 MB
- **Total amount of disk used:** 2.17 MB
### Dataset Summary
QuaRel is a crowdsourced dataset of 2771 multiple-choice story questions, including their logical forms.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 0.63 MB
- **Size of the generated dataset:** 1.53 MB
- **Total amount of disk used:** 2.17 MB
An example of 'train' looks as follows.
```
{
"answer_index": 0,
"id": "QuaRel_V1_B5_1403",
"logical_form_pretty": "qrel(time, lower, world1) -> qrel(distance, higher, world2) ; qrel(distance, higher, world1)",
"logical_forms": ["(infer (time lower world1) (distance higher world2) (distance higher world1))", "(infer (time lower world2) (distance higher world1) (distance higher world2))"],
"question": "John and Rita are going for a run. Rita gets tired and takes a break on the park bench. After twenty minutes in the park, who has run farther? (A) John (B) Rita",
"world_literals": {
"world1": ["Rita"],
"world2": ["John"]
}
}
```
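Given an instance like the one above, the selected answer string can be recovered by pairing `answer_index` with the `(A)`/`(B)` options embedded in `question`. A minimal sketch (the `answer_text` helper is hypothetical, not part of the dataset loader; the fields are copied from the example above):

```python
import re

# The 'train' instance above, reduced to the two fields needed here.
example = {
    "answer_index": 0,
    "question": (
        "John and Rita are going for a run. Rita gets tired and takes a "
        "break on the park bench. After twenty minutes in the park, who "
        "has run farther? (A) John (B) Rita"
    ),
}

def answer_text(ex):
    """Pair answer_index with the '(A) ... (B) ...' options in the question."""
    options = re.findall(r"\(([AB])\)\s*([^()]+?)(?=\s*\([AB]\)|$)", ex["question"])
    return options[ex["answer_index"]][1].strip()

print(answer_text(example))  # John
```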
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `answer_index`: a `int32` feature.
- `logical_forms`: a `list` of `string` features.
- `logical_form_pretty`: a `string` feature.
- `world_literals`: a dictionary feature containing:
- `world1`: a `string` feature.
- `world2`: a `string` feature.
- `question`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 1941| 278| 552|
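As a quick sanity check, the split sizes in the table above sum to the 2771 questions mentioned in the dataset summary:

```python
# Split sizes copied from the table above.
splits = {"train": 1941, "validation": 278, "test": 552}
print(sum(splits.values()))  # 2771
```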
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{quarel_v1,
  title={QuaRel: A Dataset and Models for Answering Questions about Qualitative Relationships},
  author={Oyvind Tafjord and Peter Clark and Matt Gardner and Wen-tau Yih and Ashish Sabharwal},
  year={2018},
  journal={arXiv:1805.05377v1}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
203427as321/articles | 2023-10-11T01:00:06.000Z | [
"region:us"
] | 203427as321 | null | null | null | 0 | 1,458 | ---
dataset_info:
features:
- name: label
dtype: string
- name: text
dtype: string
- name: __index_level_0__
dtype: float64
splits:
- name: train
num_bytes: 23996247
num_examples: 1534
download_size: 0
dataset_size: 23996247
---
# Dataset Card for "articles"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shariqfarooq/cs323_densepred_seg256 | 2023-09-16T12:07:20.000Z | [
"region:us"
] | shariqfarooq | null | null | null | 0 | 1,454 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: mask
dtype: image
splits:
- name: train
num_bytes: 187512341.0
num_examples: 1464
- name: val
num_bytes: 187805177.75
num_examples: 1449
download_size: 375496804
dataset_size: 375317518.75
---
# Dataset Card for "cs323_densepred_seg256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
derek-thomas/ScienceQA | 2023-02-25T04:23:01.000Z | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:other",
"task_categories:visual-question-answering",
"task_categories:text-classification",
"task_ids:multiple-choice-qa",
"task_ids:closed-domain-qa",
"task_ids:open-domain-qa",
"task_ids:visual-question-answering",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"annotations_creators:found",
"language_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"multi-modal-qa",
"science",
"chemistry",
"biology",
"physics",
"earth-science",
"engineering",
"geography",
"history",
"world-history",
"civics",
"economics",
"global-studies",
"grammar",
"writing",
"vocabulary",
"natural-science",
"language-science",
"social-science",
"arxiv:2209.09513",
"region:us"
] | derek-thomas | null | null | null | 66 | 1,452 | ---
license: cc-by-sa-4.0
annotations_creators:
- expert-generated
- found
language:
- en
language_creators:
- expert-generated
- found
multilinguality:
- monolingual
paperswithcode_id: scienceqa
pretty_name: ScienceQA
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- multi-modal-qa
- science
- chemistry
- biology
- physics
- earth-science
- engineering
- geography
- history
- world-history
- civics
- economics
- global-studies
- grammar
- writing
- vocabulary
- natural-science
- language-science
- social-science
task_categories:
- multiple-choice
- question-answering
- other
- visual-question-answering
- text-classification
task_ids:
- multiple-choice-qa
- closed-domain-qa
- open-domain-qa
- visual-question-answering
- multi-class-classification
dataset_info:
features:
- name: image
dtype: image
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int8
- name: hint
dtype: string
- name: task
dtype: string
- name: grade
dtype: string
- name: subject
dtype: string
- name: topic
dtype: string
- name: category
dtype: string
- name: skill
dtype: string
- name: lecture
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 16416902
num_examples: 12726
- name: validation
num_bytes: 5404896
num_examples: 4241
- name: test
num_bytes: 5441676
num_examples: 4241
download_size: 0
dataset_size: 27263474
---
# Dataset Card for ScienceQA
## Table of Contents
- [Dataset Card for ScienceQA](#dataset-card-for-scienceqa)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://scienceqa.github.io/index.html#home](https://scienceqa.github.io/index.html#home)
- **Repository:** [https://github.com/lupantech/ScienceQA](https://github.com/lupantech/ScienceQA)
- **Paper:** [https://arxiv.org/abs/2209.09513](https://arxiv.org/abs/2209.09513)
- **Leaderboard:** [https://paperswithcode.com/dataset/scienceqa](https://paperswithcode.com/dataset/scienceqa)
- **Point of Contact:** [Pan Lu](https://lupantech.github.io/) or file an issue on [Github](https://github.com/lupantech/ScienceQA/issues)
### Dataset Summary
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
### Supported Tasks and Leaderboards
Multi-modal Multiple Choice
### Languages
English
## Dataset Structure
### Data Instances
Explore more samples [here](https://scienceqa.github.io/explore.html).
``` json
{'image': Image,
'question': 'Which of these states is farthest north?',
'choices': ['West Virginia', 'Louisiana', 'Arizona', 'Oklahoma'],
'answer': 0,
'hint': '',
'task': 'closed choice',
'grade': 'grade2',
'subject': 'social science',
'topic': 'geography',
'category': 'Geography',
'skill': 'Read a map: cardinal directions',
'lecture': 'Maps have four cardinal directions, or main directions. Those directions are north, south, east, and west.\nA compass rose is a set of arrows that point to the cardinal directions. A compass rose usually shows only the first letter of each cardinal direction.\nThe north arrow points to the North Pole. On most maps, north is at the top of the map.',
'solution': 'To find the answer, look at the compass rose. Look at which way the north arrow is pointing. West Virginia is farthest north.'}
```
Some records may be missing any or all of the `image`, `lecture`, and `solution` fields.
### Data Fields
- `image` : Contextual image
- `question` : Prompt relating to the `lecture`
- `choices` : Multiple choice answer with 1 correct to the `question`
- `answer` : Index of choices corresponding to the correct answer
- `hint` : Hint to help answer the `question`
- `task` : Task description
- `grade` : Grade level from K-12
- `subject` : High-level subject area: natural science, social science, or language science
- `topic` : Topic within the subject (e.g. geography, physics)
- `category` : A subcategory of `topic`
- `skill` : A description of the task required
- `lecture` : A relevant lecture that a `question` is generated from
- `solution` : Instructions on how to solve the `question`
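As a quick illustration of how the fields fit together, the `answer` field is an index into `choices`. A minimal sketch, using a toy record whose values are copied from the sample instance above (not fetched from the live dataset):

```python
# Toy record shaped like one ScienceQA row; values are copied from the
# sample instance shown earlier, not fetched from the live dataset.
record = {
    "question": "Which of these states is farthest north?",
    "choices": ["West Virginia", "Louisiana", "Arizona", "Oklahoma"],
    "answer": 0,  # int8 index into `choices`
}

# The correct answer text is simply choices[answer].
correct_choice = record["choices"][record["answer"]]
print(correct_choice)  # West Virginia
```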
### Data Splits
| Split      | Examples | Size (bytes) |
|------------|---------:|-------------:|
| train      |   12,726 |   16,416,902 |
| validation |    4,241 |    5,404,896 |
| test       |    4,241 |    5,441,676 |
## Dataset Creation
### Curation Rationale
When answering a question, humans utilize the information available across different modalities to synthesize a consistent and complete chain of thought (CoT). This process is normally a black box in the case of deep learning models like large-scale language models. Recently, science question benchmarks have been used to diagnose the multi-hop reasoning ability and interpretability of an AI system. However, existing datasets fail to provide annotations for the answers, or are restricted to the textual-only modality, small scales, and limited domain diversity. To this end, we present Science Question Answering (ScienceQA).
### Source Data
ScienceQA is collected from elementary and high school science curricula.
#### Initial Data Collection and Normalization
See Below
#### Who are the source language producers?
See Below
### Annotations
Questions in the ScienceQA dataset are sourced from open resources managed by IXL Learning,
an online learning platform curated by experts in the field of K-12 education. The dataset includes
problems that align with California Common Core Content Standards. To construct ScienceQA, we
downloaded the original science problems and then extracted individual components (e.g. questions,
hints, images, options, answers, lectures, and solutions) from them based on heuristic rules.
We manually removed invalid questions, such as questions that have only one choice, questions that
contain faulty data, and questions that are duplicated, to comply with fair use and transformative
use of the law. If there were multiple correct answers that applied, we kept only one correct answer.
Also, we shuffled the answer options of each question to ensure the choices do not follow any
specific pattern. To make the dataset easy to use, we then used semi-automated scripts to reformat
the lectures and solutions. Therefore, special structures in the texts, such as tables and lists, are
easily distinguishable from simple text passages. Similar to ImageNet, ReClor, and PMR datasets,
ScienceQA is available for non-commercial research purposes only and the copyright belongs to
the original authors. To ensure data quality, we developed a data exploration tool to review examples
in the collected dataset, and incorrect annotations were further manually revised by experts. The tool
can be accessed at https://scienceqa.github.io/explore.html.
#### Annotation process
See above
#### Who are the annotators?
See above
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
- Pan Lu (1, 3)
- Swaroop Mishra (2, 3)
- Tony Xia (1)
- Liang Qiu (1)
- Kai-Wei Chang (1)
- Song-Chun Zhu (1)
- Oyvind Tafjord (3)
- Peter Clark (3)
- Ashwin Kalyan (3)
From:
1. University of California, Los Angeles
2. Arizona State University
3. Allen Institute for AI
### Licensing Information
[Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
](https://creativecommons.org/licenses/by-nc-sa/4.0/)
### Citation Information
```
@inproceedings{lu2022learn,
title={Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering},
  author={Lu, Pan and Mishra, Swaroop and Xia, Tony and Qiu, Liang and Chang, Kai-Wei and Zhu, Song-Chun and Tafjord, Oyvind and Clark, Peter and Kalyan, Ashwin},
booktitle={The 36th Conference on Neural Information Processing Systems (NeurIPS)},
year={2022}
}
```
### Contributions
Thanks to [Derek Thomas](https://huggingface.co/derek-thomas) [@datavistics](https://github.com/datavistics) for adding this dataset. |
jxie/higgs | 2023-09-20T06:01:24.000Z | [
"region:us"
] | jxie | null | null | null | 0 | 1,448 | ---
dataset_info:
features:
- name: inputs
sequence: float64
- name: label
dtype: float64
splits:
- name: val_16k
num_bytes: 3702368
num_examples: 15688
- name: train_10k
num_bytes: 2360000
num_examples: 10000
- name: train_1k
num_bytes: 236000
num_examples: 1000
- name: train_68k
num_bytes: 14809236
num_examples: 62751
- name: train_100k
num_bytes: 23600000
num_examples: 100000
- name: train
num_bytes: 2478000000
num_examples: 10500000
- name: test
num_bytes: 118000000
num_examples: 500000
- name: test_20k
num_bytes: 4627960
num_examples: 19610
- name: train_63k
num_bytes: 14809236
num_examples: 62751
download_size: 2168393527
dataset_size: 2660144800
---
# Dataset Card for "higgs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlexanderDoria/novel17_test | 2023-07-19T12:26:36.000Z | [
"license:cc0-1.0",
"region:us"
] | AlexanderDoria | null | null | null | 6 | 1,443 | ---
license: cc0-1.0
---
|
daekeun-ml/naver-news-summarization-ko | 2023-01-10T11:12:44.000Z | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:ko",
"license:apache-2.0",
"region:us"
] | daekeun-ml | null | null | null | 9 | 1,435 | ---
license: apache-2.0
task_categories:
- summarization
language:
- ko
size_categories:
- 10K<n<100K
---
This is a custom dataset created by the author by crawling Naver News (https://news.naver.com) for a Korean NLP hands-on tutorial.
- Period: July 1, 2022 - July 10, 2022
- Subject: IT, economics
```
DatasetDict({
train: Dataset({
features: ['date', 'category', 'press', 'title', 'document', 'link', 'summary'],
num_rows: 22194
})
test: Dataset({
features: ['date', 'category', 'press', 'title', 'document', 'link', 'summary'],
num_rows: 2740
})
validation: Dataset({
features: ['date', 'category', 'press', 'title', 'document', 'link', 'summary'],
num_rows: 2466
})
})
```
|
danjacobellis/AVIRIS_256 | 2023-09-27T05:19:51.000Z | [
"region:us"
] | danjacobellis | null | null | null | 0 | 1,434 | Entry not found |
craffel/openai_lambada | 2021-10-12T20:22:47.000Z | [
"region:us"
] | craffel | LAMBADA dataset variant used by OpenAI to evaluate GPT-2 and GPT-3. | @InProceedings{paperno-EtAl:2016:P16-1,
author = {Paperno, Denis and Kruszewski, Germ\'{a}n and Lazaridou,
Angeliki and Pham, Ngoc Quan and Bernardi, Raffaella and Pezzelle,
Sandro and Baroni, Marco and Boleda, Gemma and Fernandez, Raquel},
title = {The {LAMBADA} dataset: Word prediction requiring a broad
discourse context},
booktitle = {Proceedings of the 54th Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers)},
month = {August},
year = {2016},
address = {Berlin, Germany},
publisher = {Association for Computational Linguistics},
pages = {1525--1534},
url = {http://www.aclweb.org/anthology/P16-1144}
} | null | 1 | 1,433 | Entry not found |
ccdv/pubmed-summarization | 2022-10-24T20:33:04.000Z | [
"task_categories:summarization",
"task_categories:text-generation",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"conditional-text-generation",
"region:us"
] | ccdv | PubMed dataset for summarization.
From paper: "A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents" by A. Cohan et al.
See: https://aclanthology.org/N18-2097.pdf
See: https://github.com/armancohan/long-summarization | @inproceedings{cohan-etal-2018-discourse,
title = "A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents",
author = "Cohan, Arman and
Dernoncourt, Franck and
Kim, Doo Soon and
Bui, Trung and
Kim, Seokhwan and
Chang, Walter and
Goharian, Nazli",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2097",
doi = "10.18653/v1/N18-2097",
pages = "615--621",
abstract = "Neural abstractive summarization models have led to promising results in summarizing relatively short documents. We propose the first model for abstractive summarization of single, longer-form documents (e.g., research papers). Our approach consists of a new hierarchical encoder that models the discourse structure of a document, and an attentive discourse-aware decoder to generate the summary. Empirical results on two large-scale datasets of scientific papers show that our model significantly outperforms state-of-the-art models.",
} | null | 28 | 1,431 | ---
language:
- en
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- summarization
- text-generation
task_ids: []
tags:
- conditional-text-generation
---
# PubMed dataset for summarization
Dataset for summarization of long documents.\
Adapted from this [repo](https://github.com/armancohan/long-summarization).\
Note that the original data are pre-tokenized, so this dataset returns `" ".join(tokens)` and inserts `"\n"` between paragraphs. \
This dataset is compatible with the [`run_summarization.py`](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) script from Transformers if you add this line to the `summarization_name_mapping` variable:
```python
"ccdv/pubmed-summarization": ("article", "abstract")
```
### Data Fields
- `id`: paper id
- `article`: a string containing the body of the paper
- `abstract`: a string containing the abstract of the paper
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_. \
Token counts are white space based.
| Dataset Split | Number of Instances | Avg. tokens (article / abstract) |
| ------------- | ------------------- | -------------------------------- |
| Train | 119,924 | 3043 / 215 |
| Validation | 6,633 | 3111 / 216 |
| Test | 6,658 | 3092 / 219 |
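The whitespace-based counting rule above can be reproduced with a short helper. This is a sketch; `texts` is an assumed list of article strings, not an actual dataset column:

```python
def avg_ws_tokens(texts):
    """Average token count, where a "token" is a whitespace-separated chunk,
    matching the counting rule used for the table above."""
    return sum(len(t.split()) for t in texts) / len(texts)

print(avg_ws_tokens(["the cat sat", "on the mat today"]))  # 3.5
```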
# Cite original article
```
@inproceedings{cohan-etal-2018-discourse,
title = "A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents",
author = "Cohan, Arman and
Dernoncourt, Franck and
Kim, Doo Soon and
Bui, Trung and
Kim, Seokhwan and
Chang, Walter and
Goharian, Nazli",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2097",
doi = "10.18653/v1/N18-2097",
pages = "615--621",
abstract = "Neural abstractive summarization models have led to promising results in summarizing relatively short documents. We propose the first model for abstractive summarization of single, longer-form documents (e.g., research papers). Our approach consists of a new hierarchical encoder that models the discourse structure of a document, and an attentive discourse-aware decoder to generate the summary. Empirical results on two large-scale datasets of scientific papers show that our model significantly outperforms state-of-the-art models.",
}
```
|
lmsys/chatbot_arena_conversations | 2023-09-30T01:04:44.000Z | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"license:cc",
"arxiv:2306.05685",
"region:us"
] | lmsys | null | null | null | 136 | 1,428 | ---
dataset_info:
features:
- name: question_id
dtype: string
- name: model_a
dtype: string
- name: model_b
dtype: string
- name: winner
dtype: string
- name: judge
dtype: string
- name: conversation_a
list:
- name: content
dtype: string
- name: role
dtype: string
- name: conversation_b
list:
- name: content
dtype: string
- name: role
dtype: string
- name: turn
dtype: int64
- name: anony
dtype: bool
- name: language
dtype: string
- name: tstamp
dtype: float64
- name: openai_moderation
struct:
- name: categories
struct:
- name: harassment
dtype: bool
- name: harassment/threatening
dtype: bool
- name: hate
dtype: bool
- name: hate/threatening
dtype: bool
- name: self-harm
dtype: bool
- name: self-harm/instructions
dtype: bool
- name: self-harm/intent
dtype: bool
- name: sexual
dtype: bool
- name: sexual/minors
dtype: bool
- name: violence
dtype: bool
- name: violence/graphic
dtype: bool
- name: category_scores
struct:
- name: harassment
dtype: float64
- name: harassment/threatening
dtype: float64
- name: hate
dtype: float64
- name: hate/threatening
dtype: float64
- name: self-harm
dtype: float64
- name: self-harm/instructions
dtype: float64
- name: self-harm/intent
dtype: float64
- name: sexual
dtype: float64
- name: sexual/minors
dtype: float64
- name: violence
dtype: float64
- name: violence/graphic
dtype: float64
- name: flagged
dtype: bool
- name: toxic_chat_tag
struct:
- name: roberta-large
struct:
- name: flagged
dtype: bool
- name: probability
dtype: float64
- name: t5-large
struct:
- name: flagged
dtype: bool
- name: score
dtype: float64
splits:
- name: train
num_bytes: 81159839
num_examples: 33000
download_size: 41572998
dataset_size: 81159839
license: cc
task_categories:
- conversational
size_categories:
- 10K<n<100K
extra_gated_prompt: "Disclaimers and Terms\n\
- This dataset contains conversations that may be considered unsafe, offensive, or upsetting. It is not intended for training dialogue agents without applying appropriate filtering measures. We are not responsible for any outputs of the models trained on this dataset.\n\
- Statements or opinions made in this dataset do not reflect the views of researchers or institutions involved in the data collection effort.\n\
- Users of this data are responsible for ensuring its appropriate use, which includes abiding by any applicable laws and regulations.\n\
- Users of this data should adhere to the terms of use for a specific model when using its direct outputs.\n\
- Users of this data agree to not attempt to determine the identity of individuals in this dataset."
---
## Chatbot Arena Conversations Dataset
This dataset contains 33K cleaned conversations with pairwise human preferences.
It is collected from 13K unique IP addresses on the [Chatbot Arena](https://lmsys.org/blog/2023-05-03-arena/) from April to June 2023.
Each sample includes a question ID, two model names, their full conversation text in OpenAI API JSON format, the user vote, the anonymized user ID, the detected language tag, the OpenAI moderation API tag, the additional toxic tag, and the timestamp.
To ensure the safe release of data, we have made our best efforts to remove all conversations that contain personally identifiable information (PII).
User consent is obtained through the "Terms of use" section on the data collection website.
In addition, we have included the OpenAI moderation API output to flag inappropriate conversations.
However, we have chosen to keep unsafe conversations intact so that researchers can study the safety-related questions associated with LLM usage in real-world scenarios as well as the OpenAI moderation process.
As an example, we included additional toxic tags generated by our own toxic taggers, which were trained by fine-tuning T5 and RoBERTa on manually labeled data.
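For instance, the moderation output can be used to drop flagged conversations before training. A minimal sketch over plain dicts shaped like rows of this dataset (the field values here are illustrative, not real records):

```python
# Rows mimic the schema above: each has a boolean `openai_moderation.flagged`.
rows = [
    {"question_id": "q1", "openai_moderation": {"flagged": False}},
    {"question_id": "q2", "openai_moderation": {"flagged": True}},
]

# Keep only conversations that passed the OpenAI moderation check.
clean = [r for r in rows if not r["openai_moderation"]["flagged"]]
print([r["question_id"] for r in clean])  # ['q1']
```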
**Basic Statistics**
| Key | Value |
| --- | --- |
| # Conversations | 33,000 |
| # Models | 20 |
| # Users | 13,383 |
| # Languages | 96 |
| Avg. # Turns per Sample | 1.2 |
| Avg. # Tokens per Prompt | 52.3 |
| Avg. # Tokens per Response | 189.5 |
## Uniqueness and Potential Usage
Compared to existing human preference datasets like [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) and [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1), this dataset:
- Contains the outputs of 20 LLMs including stronger LLMs such as GPT-4 and Claude-v1. It also contains many failure cases of these state-of-the-art models.
- Contains unrestricted conversations from over 13K users in the wild.
We believe it will help the AI research community answer important questions around topics like:
- Characteristics and distributions of real-world user prompts
- Training instruction-following models
- Improving and evaluating LLM evaluation methods
- Model selection and request dispatching algorithms
- AI safety and content moderation
## Disclaimers and Terms
- **This dataset contains conversations that may be considered unsafe, offensive, or upsetting.** It is not intended for training dialogue agents without applying appropriate filtering measures. We are not responsible for any outputs of the models trained on this dataset.
- Statements or opinions made in this dataset do not reflect the views of researchers or institutions involved in the data collection effort.
- Users of this data are responsible for ensuring its appropriate use, which includes abiding by any applicable laws and regulations.
- Users of this data should adhere to the terms of use for a specific model when using its direct outputs.
- Users of this data agree to not attempt to determine the identity of individuals in this dataset.
## Visualization and Elo Rating Calculation
This Colab [notebook](https://colab.research.google.com/drive/1J2Wf7sxc9SVmGnSX_lImhT246pxNVZip?usp=sharing) provides some visualizations and shows how to compute Elo ratings with the dataset.
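The Elo computation boils down to the standard pairwise rating update. A minimal sketch follows; the K-factor and starting ratings are illustrative choices, not necessarily the notebook's exact settings, and ties labeled differently in the data (e.g. "tie (bothbad)") would need mapping first:

```python
def update_elo(r_a, r_b, winner, k=32):
    """One Elo update for a single battle; `winner` matches the dataset's
    `winner` field values 'model_a', 'model_b', or 'tie'."""
    # Expected score of model a under the logistic Elo model.
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    score_a = {"model_a": 1.0, "model_b": 0.0, "tie": 0.5}[winner]
    # Symmetric update: what a gains, b loses.
    r_a += k * (score_a - expected_a)
    r_b += k * ((1 - score_a) - (1 - expected_a))
    return r_a, r_b

print(update_elo(1000, 1000, "model_a"))  # (1016.0, 984.0)
```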
## License
The user prompts are licensed under CC-BY-4.0, while the model outputs are licensed under CC-BY-NC-4.0.
## Citation
```
@misc{zheng2023judging,
title={Judging LLM-as-a-judge with MT-Bench and Chatbot Arena},
author={Lianmin Zheng and Wei-Lin Chiang and Ying Sheng and Siyuan Zhuang and Zhanghao Wu and Yonghao Zhuang and Zi Lin and Zhuohan Li and Dacheng Li and Eric. P Xing and Hao Zhang and Joseph E. Gonzalez and Ion Stoica},
year={2023},
eprint={2306.05685},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
FedML/databricks-dolly-15k-niid | 2023-09-05T12:03:26.000Z | [
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | FedML | null | null | null | 0 | 1,424 | ---
license: cc-by-sa-3.0
language:
- en
size_categories:
- 10K<n<100K
configs:
- config_name: default
default: true
data_files:
- split: train
path: "train.parquet"
- split: test
path: "test.parquet"
dataset_info:
config_name: default
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
---
This is a Non-IID split version of [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k).
|
code_x_glue_cc_clone_detection_big_clone_bench | 2022-11-18T19:30:27.000Z | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:code",
"license:c-uda",
"region:us"
] | null | Given two codes as the input, the task is to do binary classification (0/1), where 1 stands for semantic equivalence and 0 for others. Models are evaluated by F1 score.
The dataset we use is BigCloneBench and filtered following the paper Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree. | @inproceedings{svajlenko2014towards,
title={Towards a big data curated benchmark of inter-project code clones},
author={Svajlenko, Jeffrey and Islam, Judith F and Keivanloo, Iman and Roy, Chanchal K and Mia, Mohammad Mamun},
booktitle={2014 IEEE International Conference on Software Maintenance and Evolution},
pages={476--480},
year={2014},
organization={IEEE}
}
@inproceedings{wang2020detecting,
title={Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree},
author={Wang, Wenhan and Li, Ge and Ma, Bo and Xia, Xin and Jin, Zhi},
booktitle={2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER)},
pages={261--271},
year={2020},
organization={IEEE}
} | null | 4 | 1,420 | ---
annotations_creators:
- found
language_creators:
- found
language:
- code
license:
- c-uda
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
pretty_name: CodeXGlueCcCloneDetectionBigCloneBench
dataset_info:
features:
- name: id
dtype: int32
- name: id1
dtype: int32
- name: id2
dtype: int32
- name: func1
dtype: string
- name: func2
dtype: string
- name: label
dtype: bool
splits:
- name: train
num_bytes: 2888035757
num_examples: 901028
- name: validation
num_bytes: 1371399694
num_examples: 415416
- name: test
num_bytes: 1220662901
num_examples: 415416
download_size: 47955874
dataset_size: 5480098352
---
# Dataset Card for "code_x_glue_cc_clone_detection_big_clone_bench"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-BigCloneBench
### Dataset Summary
CodeXGLUE Clone-detection-BigCloneBench dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-BigCloneBench
Given two codes as the input, the task is to do binary classification (0/1), where 1 stands for semantic equivalence and 0 for others. Models are evaluated by F1 score.
The dataset we use is BigCloneBench and filtered following the paper Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree.
### Supported Tasks and Leaderboards
- `semantic-similarity-classification`: The dataset can be used to train a model for classifying whether two given Java methods are clones of each other.
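Since models on this task are evaluated by F1 score, here is a minimal sketch of computing it directly from boolean clone labels and predictions (plain Python, no evaluation library assumed):

```python
def f1_score(y_true, y_pred):
    """F1 over boolean clone labels; True means the pair is a clone."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))          # true positives
    fp = sum(p and not t for t, p in zip(y_true, y_pred))      # false positives
    fn = sum(t and not p for t, p in zip(y_true, y_pred))      # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_score([True, True, False, False], [True, False, False, True]))  # 0.5
```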
### Languages
- Java **programming** language
## Dataset Structure
### Data Instances
An example of 'test' looks as follows.
```
{
"func1": " @Test(expected = GadgetException.class)\n public void malformedGadgetSpecIsCachedAndThrows() throws Exception {\n HttpRequest request = createCacheableRequest();\n expect(pipeline.execute(request)).andReturn(new HttpResponse(\"malformed junk\")).once();\n replay(pipeline);\n try {\n specFactory.getGadgetSpec(createContext(SPEC_URL, false));\n fail(\"No exception thrown on bad parse\");\n } catch (GadgetException e) {\n }\n specFactory.getGadgetSpec(createContext(SPEC_URL, false));\n }\n",
"func2": " public InputStream getInputStream() throws TGBrowserException {\n try {\n if (!this.isFolder()) {\n URL url = new URL(this.url);\n InputStream stream = url.openStream();\n return stream;\n }\n } catch (Throwable throwable) {\n throw new TGBrowserException(throwable);\n }\n return null;\n }\n",
"id": 0,
"id1": 2381663,
"id2": 4458076,
"label": false
}
```
### Data Fields
In the following, each data field is explained. The data fields are the same among all splits.
#### default
|field name| type | description |
|----------|------|---------------------------------------------------|
|id |int32 | Index of the sample |
|id1 |int32 | The first function id |
|id2 |int32 | The second function id |
|func1 |string| The full text of the first function |
|func2 |string| The full text of the second function |
|label |bool | 1 if the functions are semantically equivalent, 0 otherwise|
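Since `label` is stored as a bool while classifiers usually expect an integer target, a record can be turned into a training pair with a small helper (the function name is illustrative; the field names match the table above):

```python
def to_training_pair(example: dict) -> tuple:
    """Convert a raw record into ((func1, func2), label) with a 0/1 label."""
    return (example["func1"], example["func2"]), int(example["label"])
```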
### Data Splits
| name |train |validation| test |
|-------|-----:|---------:|-----:|
|default|901028| 415416|415416|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Data was mined from the IJaDataset 2.0 dataset.
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
Candidate clone pairs were first identified automatically using search heuristics and then manually labeled by three judges.
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
Most of the clones are of types 1 and 2, with type 3 and especially type 4 clones being rare.
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://github.com/microsoft, https://github.com/madlag
### Licensing Information
Computational Use of Data Agreement (C-UDA) License.
### Citation Information
```
@inproceedings{svajlenko2014towards,
title={Towards a big data curated benchmark of inter-project code clones},
author={Svajlenko, Jeffrey and Islam, Judith F and Keivanloo, Iman and Roy, Chanchal K and Mia, Mohammad Mamun},
booktitle={2014 IEEE International Conference on Software Maintenance and Evolution},
pages={476--480},
year={2014},
organization={IEEE}
}
@inproceedings{wang2020detecting,
title={Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree},
author={Wang, Wenhan and Li, Ge and Ma, Bo and Xia, Xin and Jin, Zhi},
booktitle={2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER)},
pages={261--271},
year={2020},
organization={IEEE}
}
```
### Contributions
Thanks to @madlag (and partly also @ncoop57) for adding this dataset. |
emozilla/pg19-test | 2023-08-08T13:07:17.000Z | [
"region:us"
] | emozilla | null | null | null | 0 | 1,418 | ---
dataset_info:
features:
- name: short_book_title
dtype: string
- name: publication_date
dtype: int32
- name: url
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 40482852
num_examples: 100
download_size: 24874679
dataset_size: 40482852
---
# Dataset Card for "pg19-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |