id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
ShastriPranav/Java_QB | 2023-09-21T10:42:23.000Z | [
"region:us"
] | ShastriPranav | null | null | null | 0 | 7 | Entry not found |
mor40/tokenized_chitanka | 2023-09-21T11:25:28.000Z | [
"region:us"
] | mor40 | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 3200443200
num_examples: 889012
download_size: 1005331841
dataset_size: 3200443200
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tokenized_chitanka"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
josedanielaromi/Arg2000 | 2023-09-22T14:02:45.000Z | [
"region:us"
] | josedanielaromi | null | null | null | 0 | 7 | Entry not found |
jwixel/pet-insurance-data-2 | 2023-09-24T17:34:59.000Z | [
"region:us"
] | jwixel | null | null | null | 0 | 7 | Another swing at pet filing data. |
kewu93/three_styles_prompted_250_512x512_50perclass_random | 2023-09-22T18:04:14.000Z | [
"region:us"
] | kewu93 | null | null | null | 0 | 7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: style_class
dtype: string
splits:
- name: train
num_bytes: 4334193.0
num_examples: 150
- name: val
num_bytes: 4317601.0
num_examples: 150
download_size: 8183790
dataset_size: 8651794.0
---
# Dataset Card for "three_styles_prompted_250_512x512_50perclass_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hungeni/vn_books_10k | 2023-09-23T14:50:29.000Z | [
"region:us"
] | hungeni | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1729820957
num_examples: 10414
download_size: 906165886
dataset_size: 1729820957
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "vn_books_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
glukas/smd-audio-diffusion-256 | 2023-09-23T15:47:37.000Z | [
"region:us"
] | glukas | null | null | null | 0 | 7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: audio_file
dtype: string
- name: slice
dtype: int16
splits:
- name: train
num_bytes: 95076107.75
num_examples: 2834
download_size: 94963069
dataset_size: 95076107.75
---
# Dataset Card for "smd-audio-diffusion-256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
asparius/thomasbernhard-images | 2023-09-24T01:23:50.000Z | [
"region:us"
] | asparius | null | null | null | 0 | 7 | Entry not found |
eckendoerffer/wikipedia_fr | 2023-09-27T18:36:03.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:fr",
"license:cc-by-sa-3.0",
"wikipedia",
"wiki",
"fr.wikipedia.org",
"region:us"
] | eckendoerffer | null | null | null | 0 | 7 | ---
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- fr
tags:
- wikipedia
- wiki
- fr.wikipedia.org
size_categories:
- 1M<n<10M
---
# French Wikipedia Dataset
## Overview
This dataset is a curated collection of approximately 1.1 million French Wikipedia articles, scraped directly from the [official French Wikipedia site](https://fr.wikipedia.org/) on September 24, 2023.
There are already numerous datasets for Wikipedia, including the official one with [Wikipedia's dump](https://huggingface.co/datasets/wikipedia). Unfortunately, the text for the French version of this dataset is incomplete, lacking many elements like dates and locations.
As the saying goes, "garbage in, garbage out."
## Format
- **Type**: Text
- **File Extension**: `.txt`
## Structure
The dataset is divided into the following splits:
- `train.txt`: 3.45 GB - 1,810,000 rows - 90%
- `test.txt` : 192 MB - 100,575 rows - 5%
- `valid.txt`: 192 MB - 100,575 rows - 5%
Each article in the dataset exceeds 1400 characters in length.
## Data Cleaning and Preprocessing
The following elements have been excluded from the dataset:
- H1 - H4 Headings
- Lists
- Tables
- Sources and References
- Info box
- Banners
- LaTeX code
The text has been standardized for consistent formatting and line length. Additionally, the dataset has been filtered using the `langid` library to include only text in French. Some quotations or short terms in other languages, including non-Latin languages, may still be present.
## Exploring the Dataset
You can use the `explore_dataset.py` script to explore the dataset by randomly displaying a chosen number of lines from it. The script builds and saves an index of line-break byte offsets, enabling fast random access without re-reading the whole file.
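A minimal sketch of that indexing approach (hypothetical; this is not the actual `explore_dataset.py`, and it assumes one record per line):

```python
import random

def build_line_index(path):
    """Scan the file once, recording the byte offset where each line starts."""
    offsets = [0]
    with open(path, "rb") as f:
        for line in f:
            offsets.append(offsets[-1] + len(line))
    return offsets[:-1]  # drop the offset past the final line

def sample_lines(path, offsets, k=3, seed=None):
    """Seek directly to k random lines instead of re-reading the whole file."""
    rng = random.Random(seed)
    picks = rng.sample(range(len(offsets)), k)
    lines = []
    with open(path, "rb") as f:
        for i in picks:
            f.seek(offsets[i])
            lines.append(f.readline().decode("utf-8").rstrip("\n"))
    return lines
```

Once the offset list is saved to disk (e.g. with `pickle`), later runs can skip the initial scan entirely.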
## Additional Information
This dataset is a subset of a larger 10GB French dataset, which also contains several thousand books and theses in French, as well as several hundred thousand Francophone news articles.
---
# WIKIPEDIA EXTRACT
Inside the `/extract_wiki/` directory, you'll find Python scripts used to extract text to compile this dataset.
## Requirements:
```bash
pip install datasets aiohttp aiofiles beautifulsoup4 langid
```
## Scripts:
1. **1_extract_link.py**
```bash
python 1_extract_link.py
```
Script to download the Wikipedia dataset from Hugging Face, extract URLs, and save them to a text file for further processing.
2. **2_extract_content.py**
```bash
python 2_extract_content.py
```
This script retrieves the source code of Wikipedia pages based on URLs found in a text file. Instead of saving the entire HTML of the page, it trims the content, focusing on the main article section, thereby limiting the size of each record.
3. **3_extract_txt.py**
```bash
python 3_extract_txt.py
```
This script extracts the text from the HTML pages and conducts tests to filter the content that should be retained or excluded. This includes language checks, special characters, numbers, etc.
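The kind of retention checks described above can be sketched as follows (a hypothetical, stdlib-only version: minimum length, digit density, and non-Latin character density; the real script additionally runs a `langid` language check):

```python
def keep_paragraph(text, min_len=1400, max_digit_ratio=0.15, max_nonlatin_ratio=0.10):
    """Return True if a paragraph passes illustrative retention filters."""
    if len(text) < min_len:          # card states articles exceed 1400 characters
        return False
    letters = [c for c in text if c.isalpha()]
    if not letters:                  # nothing but digits/punctuation
        return False
    digits = sum(c.isdigit() for c in text)
    if digits / len(text) > max_digit_ratio:
        return False
    # rough French-alphabet check: ASCII letters plus common accented letters
    french = set("abcdefghijklmnopqrstuvwxyzàâäçéèêëîïôöùûüÿœæ")
    non_latin = sum(1 for c in letters if c.lower() not in french)
    if non_latin / len(letters) > max_nonlatin_ratio:
        return False
    return True
```

The thresholds here are illustrative, not the values used to build the dataset.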
|
serhatkurt/data | 2023-09-24T21:13:54.000Z | [
"region:us"
] | serhatkurt | null | null | null | 0 | 7 | Entry not found |
Avinash7509/Singleton_Train | 2023-09-26T21:47:01.000Z | [
"license:openrail",
"region:us"
] | Avinash7509 | null | null | null | 0 | 7 | ---
license: openrail
---
|
Brecon/Master_Train_Test | 2023-09-25T02:29:22.000Z | [
"region:us"
] | Brecon | null | null | null | 0 | 7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 446853.7995594714
num_examples: 363
- name: test
num_bytes: 112021.20044052863
num_examples: 91
download_size: 319014
dataset_size: 558875.0
---
# Dataset Card for "Master_Train_Test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
passionMan/mnist_by_class | 2023-09-27T12:56:15.000Z | [
"region:us"
] | passionMan | null | null | null | 0 | 7 | Entry not found |
M-A-D/Mixed-Arabic-Dataset-Main | 2023-10-06T17:56:33.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:translation",
"task_categories:summarization",
"language:ar",
"region:us"
] | M-A-D | null | null | null | 0 | 7 | ---
language:
- ar
task_categories:
- conversational
- text-generation
- text2text-generation
- translation
- summarization
pretty_name: MAD
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: GenId
dtype: int64
- name: SubId
dtype: int64
- name: DatasetName
dtype: string
- name: DatasetLink
dtype: string
- name: Text
dtype: string
- name: MetaData
struct:
- name: AboutAuthor
dtype: string
- name: AboutBook
dtype: string
- name: Author
dtype: string
- name: AuthorName
dtype: string
- name: BookLink
dtype: string
- name: BookName
dtype: string
- name: ChapterLink
dtype: string
- name: ChapterName
dtype: string
- name: Tags
dtype: float64
- name: __index_level_0__
dtype: float64
- name: created_date
dtype: string
- name: deleted
dtype: bool
- name: detoxify
dtype: 'null'
- name: emojis
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: id
dtype: string
- name: labels
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: value
sequence: float64
- name: lang
dtype: string
- name: message_id
dtype: string
- name: message_tree_id
dtype: string
- name: model_name
dtype: 'null'
- name: parent_id
dtype: string
- name: query_id
dtype: string
- name: rank
dtype: float64
- name: review_count
dtype: float64
- name: review_result
dtype: bool
- name: role
dtype: string
- name: synthetic
dtype: bool
- name: title
dtype: string
- name: tree_state
dtype: string
- name: url
dtype: string
- name: user_id
dtype: string
- name: ConcatenatedText
dtype: int64
- name: __index_level_0__
dtype: float64
splits:
- name: train
num_bytes: 1990497610
num_examples: 131393
download_size: 790648134
dataset_size: 1990497610
---
# Dataset Card for "Mixed-Arabic-Dataset"
## Mixed Arabic Datasets (MAD)
The Mixed Arabic Datasets (MAD) project provides a comprehensive collection of diverse Arabic-language datasets, sourced from various repositories, platforms, and domains. These datasets cover a wide range of text types, including books, articles, Wikipedia content, stories, and more.
### MAD Repo vs. MAD Main
#### MAD Repo
- **Versatility**: In the MAD Repository (MAD Repo), datasets are made available in their original, native form. Researchers and practitioners can selectively download the datasets that align with their interests or requirements.
- **Independent Access**: Each dataset is self-contained, enabling users to work with individual datasets independently, allowing for focused analyses and experiments.
#### MAD Main or simply MAD
- **Unified Dataframe**: MAD Main represents a harmonized and unified dataframe, incorporating all datasets from the MAD Repository. It provides a seamless and consolidated view of the entire MAD collection, making it convenient for comprehensive analyses and applications.
- **Holistic Perspective**: Researchers can access a broad spectrum of Arabic-language content within a single dataframe, promoting holistic exploration and insights across diverse text sources.
### Why MAD Main?
- **Efficiency**: Working with MAD Main streamlines the data acquisition process by consolidating multiple datasets into one structured dataframe. This is particularly beneficial for large-scale projects or studies requiring diverse data sources.
- **Interoperability**: With MAD Main, the datasets are integrated into a standardized format, enhancing interoperability and compatibility with a wide range of data processing and analysis tools.
- **Meta-Analysis**: Researchers can conduct comprehensive analyses, such as cross-domain studies, trend analyses, or comparative studies, by leveraging the combined richness of all MAD datasets.
### Getting Started
- To access individual datasets in their original form, refer to the MAD Repository ([Link to MAD Repo](https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Datasets-Repo)).
- For a unified view of all datasets, conveniently organized in a single dataframe, you are in the right place.
```python
from datasets import load_dataset
dataset = load_dataset("M-A-D/Mixed-Arabic-Dataset-Main")
```
### Join Us on Discord
For discussions, contributions, and community interactions, join us on Discord! [](https://discord.gg/2NpJ9JGm)
### How to Contribute
Want to contribute to the Mixed Arabic Datasets project? Follow our comprehensive guide on Google Colab for step-by-step instructions: [Contribution Guide](https://colab.research.google.com/drive/1w7_7lL6w7nM9DcDmTZe1Vfiwkio6SA-w?usp=sharing).
**Note**: If you'd like to test a contribution before submitting it, feel free to do so on the [MAD Test Dataset](https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Dataset-test).
## Citation
```
@dataset{
title = {Mixed Arabic Datasets (MAD)},
author = {MAD Community},
howpublished = {Dataset},
url = {https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Datasets-Repo},
year = {2023},
}
``` |
afern24/common_voice_13_0_dv_preprocessed | 2023-09-27T09:48:04.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | afern24 | null | null | null | 0 | 7 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- 10K<n<100K
ar:
- 100K<n<1M
as:
- 1K<n<10K
ast:
- 1K<n<10K
az:
- n<1K
ba:
- 100K<n<1M
bas:
- 1K<n<10K
be:
- 1M<n<10M
bg:
- 10K<n<100K
bn:
- 1M<n<10M
br:
- 10K<n<100K
ca:
- 1M<n<10M
ckb:
- 100K<n<1M
cnh:
- 1K<n<10K
cs:
- 100K<n<1M
cv:
- 10K<n<100K
cy:
- 100K<n<1M
da:
- 10K<n<100K
de:
- 100K<n<1M
dv:
- 10K<n<100K
dyu:
- n<1K
el:
- 10K<n<100K
en:
- 1M<n<10M
eo:
- 1M<n<10M
es:
- 1M<n<10M
et:
- 10K<n<100K
eu:
- 100K<n<1M
fa:
- 100K<n<1M
fi:
- 10K<n<100K
fr:
- 100K<n<1M
fy-NL:
- 100K<n<1M
ga-IE:
- 10K<n<100K
gl:
- 10K<n<100K
gn:
- 1K<n<10K
ha:
- 10K<n<100K
hi:
- 10K<n<100K
hsb:
- 1K<n<10K
hu:
- 10K<n<100K
hy-AM:
- 1K<n<10K
ia:
- 10K<n<100K
id:
- 10K<n<100K
ig:
- 1K<n<10K
is:
- n<1K
it:
- 100K<n<1M
ja:
- 100K<n<1M
ka:
- 10K<n<100K
kab:
- 100K<n<1M
kk:
- 1K<n<10K
kmr:
- 10K<n<100K
ko:
- 1K<n<10K
ky:
- 10K<n<100K
lg:
- 100K<n<1M
lo:
- n<1K
lt:
- 10K<n<100K
lv:
- 10K<n<100K
mdf:
- n<1K
mhr:
- 100K<n<1M
mk:
- n<1K
ml:
- 1K<n<10K
mn:
- 10K<n<100K
mr:
- 10K<n<100K
mrj:
- 10K<n<100K
mt:
- 10K<n<100K
myv:
- 1K<n<10K
nan-tw:
- 10K<n<100K
ne-NP:
- n<1K
nl:
- 10K<n<100K
nn-NO:
- n<1K
oc:
- 1K<n<10K
or:
- 1K<n<10K
pa-IN:
- 1K<n<10K
pl:
- 100K<n<1M
pt:
- 100K<n<1M
quy:
- n<1K
rm-sursilv:
- 1K<n<10K
rm-vallader:
- 1K<n<10K
ro:
- 10K<n<100K
ru:
- 100K<n<1M
rw:
- 1M<n<10M
sah:
- 1K<n<10K
sat:
- n<1K
sc:
- 1K<n<10K
sk:
- 10K<n<100K
skr:
- 1K<n<10K
sl:
- 10K<n<100K
sr:
- 1K<n<10K
sv-SE:
- 10K<n<100K
sw:
- 100K<n<1M
ta:
- 100K<n<1M
th:
- 100K<n<1M
ti:
- n<1K
tig:
- n<1K
tk:
- 1K<n<10K
tok:
- 10K<n<100K
tr:
- 10K<n<100K
tt:
- 10K<n<100K
tw:
- n<1K
ug:
- 10K<n<100K
uk:
- 10K<n<100K
ur:
- 100K<n<1M
uz:
- 100K<n<1M
vi:
- 10K<n<100K
vot:
- n<1K
yo:
- 1K<n<10K
yue:
- 10K<n<100K
zh-CN:
- 100K<n<1M
zh-HK:
- 100K<n<1M
zh-TW:
- 100K<n<1M
source_datasets:
- extended|common_voice
task_categories:
- automatic-speech-recognition
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 13.0
language_bcp47:
- ab
- ar
- as
- ast
- az
- ba
- bas
- be
- bg
- bn
- br
- ca
- ckb
- cnh
- cs
- cv
- cy
- da
- de
- dv
- dyu
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- gl
- gn
- ha
- hi
- hsb
- hu
- hy-AM
- ia
- id
- ig
- is
- it
- ja
- ka
- kab
- kk
- kmr
- ko
- ky
- lg
- lo
- lt
- lv
- mdf
- mhr
- mk
- ml
- mn
- mr
- mrj
- mt
- myv
- nan-tw
- ne-NP
- nl
- nn-NO
- oc
- or
- pa-IN
- pl
- pt
- quy
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sat
- sc
- sk
- skr
- sl
- sr
- sv-SE
- sw
- ta
- th
- ti
- tig
- tk
- tok
- tr
- tt
- tw
- ug
- uk
- ur
- uz
- vi
- vot
- yo
- yue
- zh-CN
- zh-HK
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
---
# Dataset Card for Common Voice Corpus 13.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Vaibhav Srivastav](mailto:vaibhav@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Autoevaluate Leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=mozilla-foundation%2Fcommon_voice_11_0&only_verified=0&task=automatic-speech-recognition&config=ar&split=test&metric=wer)
### Languages
```
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Yoruba
```
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
```python
from datasets import load_dataset
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train", streaming=True)
print(next(iter(cv_13)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
batch_sampler = BatchSampler(RandomSampler(cv_13), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_13, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train", streaming=True)
dataloader = DataLoader(cv_13, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_13_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
``` |
Cris-AV/Llama-Math-format | 2023-09-25T18:41:56.000Z | [
"region:us"
] | Cris-AV | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 10269
num_examples: 50
download_size: 0
dataset_size: 10269
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Llama-Math-format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kewu93/three_styles_prompted_all_512x512_excluded_training | 2023-09-25T22:30:01.000Z | [
"region:us"
] | kewu93 | null | null | null | 0 | 7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: style_class
dtype: string
splits:
- name: train
num_bytes: 7284057.537128714
num_examples: 300
- name: val
num_bytes: 4317601.0
num_examples: 150
download_size: 12016133
dataset_size: 11601658.537128713
---
# Dataset Card for "three_styles_prompted_all_512x512_excluded_training"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
polinaeterna/glue | 2023-10-04T14:05:09.000Z | [
"region:us"
] | polinaeterna | GLUE, the General Language Understanding Evaluation benchmark
(https://gluebenchmark.com/) is a collection of resources for training,
evaluating, and analyzing natural language understanding systems. | @inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
} | null | 0 | 7 | Entry not found |
Tzzey/test | 2023-09-27T21:20:56.000Z | [
"region:us"
] | Tzzey | null | null | null | 0 | 7 | Entry not found |
woo2/gpt2sql_bank | 2023-09-29T13:49:01.000Z | [
"region:us"
] | woo2 | null | null | null | 0 | 7 | Entry not found |
Abhitej5965/textToDDLQuery | 2023-09-28T11:23:10.000Z | [
"license:apache-2.0",
"region:us"
] | Abhitej5965 | null | null | null | 0 | 7 | ---
license: apache-2.0
---
|
lemmylemmy/code_scheme_data | 2023-09-29T09:15:34.000Z | [
"region:us"
] | lemmylemmy | null | null | null | 0 | 7 | Entry not found |
juraj-juraj/doc_gen | 2023-09-29T09:10:24.000Z | [
"task_categories:text-generation",
"language:en",
"license:mit",
"region:us"
] | juraj-juraj | null | null | null | 0 | 7 | ---
language:
- en
license: mit
task_categories:
- text-generation
pretty_name: py_code_doc
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: docstring
dtype: string
- name: function
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 525428666
num_examples: 502378
- name: validation
num_bytes: 624971
num_examples: 459
- name: test
num_bytes: 673898
num_examples: 666
download_size: 198280913
dataset_size: 526727535
---
# Code documentation dataset
This dataset aims to leverage language models to automatically generate documentation for undocumented Python code. It consists of pairs of code and its documentation.
The content of the dataset is derived from the CodeSearchNet dataset. |
oscorrea/scores-h-curated-28-09 | 2023-09-29T01:20:28.000Z | [
"region:us"
] | oscorrea | null | null | null | 0 | 7 | Entry not found |
anirudh-sub/paradigms_small | 2023-09-29T02:26:55.000Z | [
"region:us"
] | anirudh-sub | null | null | null | 0 | 7 | Entry not found |
FunPang/medical_dataset | 2023-09-29T07:47:28.000Z | [
"region:us"
] | FunPang | null | null | null | 0 | 7 | Entry not found |
liyucheng/mmlu_mini | 2023-09-29T13:02:02.000Z | [
"region:us"
] | liyucheng | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
- name: task
dtype: string
splits:
- name: val
num_bytes: 494633.0905282202
num_examples: 1000
- name: test
num_bytes: 489506.01082613575
num_examples: 1000
- name: train
num_bytes: 435903.50877192983
num_examples: 1000
download_size: 587231
dataset_size: 1420042.6101262858
---
# Dataset Card for "mmlu_mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Lolz14/mine | 2023-10-10T06:08:49.000Z | [
"license:mit",
"region:us"
] | Lolz14 | null | null | null | 0 | 7 | ---
license: mit
---
|
TheVarunKaushik/VEXQuestions | 2023-09-29T21:18:14.000Z | [
"region:us"
] | TheVarunKaushik | null | null | null | 0 | 7 | Entry not found |
MaxReynolds/Lee_Souder_RocketLauncher | 2023-09-30T01:57:33.000Z | [
"region:us"
] | MaxReynolds | null | null | null | 0 | 7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 279829.0
num_examples: 28
download_size: 0
dataset_size: 279829.0
---
# Dataset Card for "Lee_Souder_RocketLauncher"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Tiax/demo | 2023-09-30T05:30:06.000Z | [
"license:apache-2.0",
"region:us"
] | Tiax | null | null | null | 0 | 7 | ---
license: apache-2.0
---
|
cbasconc/instructions_Device | 2023-10-09T21:01:17.000Z | [
"language:es",
"region:us"
] | cbasconc | null | null | null | 0 | 7 | ---
language:
- es
pretty_name: devices_classification
--- |
Photolens/alpaca-cleaned-airoboros-2.1-no-code-oasst1-en-merged | 2023-10-01T05:39:23.000Z | [
"language:en",
"license:cc-by-4.0",
"region:us"
] | Photolens | null | null | null | 2 | 7 | ---
language:
- en
license: cc-by-4.0
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 139998943
num_examples: 107177
download_size: 73347915
dataset_size: 139998943
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This dataset is a merged dataset of:
- [Photolens/alpaca-cleaned](https://huggingface.co/datasets/Photolens/alpaca-cleaned)
- [Photolens/airoboros-2.1-no-code](https://huggingface.co/datasets/Photolens/airoboros-2.1-no-code)
- [Photolens/oasst1-en](https://huggingface.co/datasets/Photolens/oasst1-en) |
nikchar/retrieval_verification_bm25_roberta | 2023-10-01T09:05:46.000Z | [
"region:us"
] | nikchar | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: claim
dtype: string
- name: evidence_wiki_url
dtype: string
- name: text
dtype: string
- name: retrieved_evidence_title
sequence: string
- name: retrieved_evidence_text
sequence: string
- name: labels
dtype: int64
- name: Retrieval_Success
dtype: bool
- name: Predicted_Labels
dtype: int64
- name: Predicted_Labels_Each_doc
sequence: int64
splits:
- name: train
num_bytes: 66031496
num_examples: 11073
download_size: 30811974
dataset_size: 66031496
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "retrieval_verification_bm25_roberta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pjaekae/automotive_engineering | 2023-10-02T16:34:11.000Z | [
"license:apache-2.0",
"region:us"
] | pjaekae | null | null | null | 0 | 7 | ---
license: apache-2.0
---
Synthetic data generated with GPT-3.5 |
BiancaZYCao/food_caption | 2023-10-01T15:46:02.000Z | [
"region:us"
] | BiancaZYCao | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: image_url
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 602700071.1864096
num_examples: 2679713
download_size: 469085661
dataset_size: 602700071.1864096
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "food_caption"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tomaarsen/MultiCoNER | 2023-10-01T19:39:19.000Z | [
"task_categories:token-classification",
"size_categories:100K<n<1M",
"language:bn",
"language:de",
"language:en",
"language:es",
"language:fa",
"language:hi",
"language:ko",
"language:nl",
"language:ru",
"language:tr",
"language:zh",
"language:multilingual",
"license:cc-by-4.0",
"multi... | tomaarsen | We present MultiCoNER, a large multilingual dataset for Named Entity Recognition that covers 3 domains (Wiki sentences, questions, and search queries) across 11 languages, as well as multilingual and code-mixing subsets. This dataset is designed to represent contemporary challenges in NER, including low-context scenarios (short and uncased text), syntactically complex entities like movie titles, and long-tail entity distributions. The 26M token dataset is compiled from public resources using techniques such as heuristic-based sentence sampling, template extraction and slotting, and machine translation. We applied two NER models on our dataset: a baseline XLM-RoBERTa model, and a state-of-the-art GEMNET model that leverages gazetteers. The baseline achieves moderate performance (macro-F1=54%), highlighting the difficulty of our data. GEMNET, which uses gazetteers, improves significantly (average improvement of macro-F1=+30%). MultiCoNER poses challenges even for large pre-trained language models, and we believe that it can help further research in building robust NER systems. MultiCoNER is publicly available at https://registry.opendata.aws/multiconer/ and we hope that this resource will help advance research in various aspects of NER. | @misc{malmasi2022multiconer,
title={MultiCoNER: A Large-scale Multilingual dataset for Complex Named Entity Recognition},
author={Shervin Malmasi and Anjie Fang and Besnik Fetahu and Sudipta Kar and Oleg Rokhlenko},
year={2022},
eprint={2208.14536},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 0 | 7 | ---
license: cc-by-4.0
task_categories:
- token-classification
language:
- bn
- de
- en
- es
- fa
- hi
- ko
- nl
- ru
- tr
- zh
- multilingual
tags:
- multiconer
- ner
- multilingual
- named entity recognition
size_categories:
- 100K<n<1M
dataset_info:
- config_name: bn
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-CORP
'6': I-CORP
'7': B-GRP
'8': I-GRP
'9': B-PROD
'10': I-PROD
'11': B-CW
'12': I-CW
splits:
- name: train
num_bytes: 5616369
num_examples: 15300
- name: validation
num_bytes: 301806
num_examples: 800
- name: test
num_bytes: 21668288
num_examples: 133119
download_size: 31446032
dataset_size: 27586463
- config_name: de
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-CORP
'6': I-CORP
'7': B-GRP
'8': I-GRP
'9': B-PROD
'10': I-PROD
'11': B-CW
'12': I-CW
splits:
- name: train
num_bytes: 4056698
num_examples: 15300
- name: validation
num_bytes: 214572
num_examples: 800
- name: test
num_bytes: 37113304
num_examples: 217824
download_size: 44089736
dataset_size: 41384574
- config_name: en
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-CORP
'6': I-CORP
'7': B-GRP
'8': I-GRP
'9': B-PROD
'10': I-PROD
'11': B-CW
'12': I-CW
splits:
- name: train
num_bytes: 4330080
num_examples: 15300
- name: validation
num_bytes: 229689
num_examples: 800
- name: test
num_bytes: 38728401
num_examples: 217818
download_size: 44709663
dataset_size: 43288170
- config_name: es
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-CORP
'6': I-CORP
'7': B-GRP
'8': I-GRP
'9': B-PROD
'10': I-PROD
'11': B-CW
'12': I-CW
splits:
- name: train
num_bytes: 4576557
num_examples: 15300
- name: validation
num_bytes: 238872
num_examples: 800
- name: test
num_bytes: 41457435
num_examples: 217887
download_size: 46861727
dataset_size: 46272864
- config_name: fa
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-CORP
'6': I-CORP
'7': B-GRP
'8': I-GRP
'9': B-PROD
'10': I-PROD
'11': B-CW
'12': I-CW
splits:
- name: train
num_bytes: 5550551
num_examples: 15300
- name: validation
num_bytes: 294184
num_examples: 800
- name: test
num_bytes: 30301688
num_examples: 165702
download_size: 38042406
dataset_size: 36146423
- config_name: hi
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-CORP
'6': I-CORP
'7': B-GRP
'8': I-GRP
'9': B-PROD
'10': I-PROD
'11': B-CW
'12': I-CW
splits:
- name: train
num_bytes: 6189324
num_examples: 15300
- name: validation
num_bytes: 321246
num_examples: 800
- name: test
num_bytes: 25771882
num_examples: 141565
download_size: 35165171
dataset_size: 32282452
- config_name: ko
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-CORP
'6': I-CORP
'7': B-GRP
'8': I-GRP
'9': B-PROD
'10': I-PROD
'11': B-CW
'12': I-CW
splits:
- name: train
num_bytes: 4439652
num_examples: 15300
- name: validation
num_bytes: 233963
num_examples: 800
- name: test
num_bytes: 27529239
num_examples: 178249
download_size: 35281170
dataset_size: 32202854
- config_name: mix
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-CORP
'6': I-CORP
'7': B-GRP
'8': I-GRP
'9': B-PROD
'10': I-PROD
'11': B-CW
'12': I-CW
splits:
- name: train
num_bytes: 307844
num_examples: 1500
- name: validation
num_bytes: 100909
num_examples: 500
- name: test
num_bytes: 20218549
num_examples: 100000
download_size: 21802985
dataset_size: 20627302
- config_name: multi
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-CORP
'6': I-CORP
'7': B-GRP
'8': I-GRP
'9': B-PROD
'10': I-PROD
'11': B-CW
'12': I-CW
splits:
- name: train
num_bytes: 54119956
num_examples: 168300
- name: validation
num_bytes: 2846552
num_examples: 8800
- name: test
num_bytes: 91509480
num_examples: 471911
download_size: 148733494
dataset_size: 148475988
- config_name: nl
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-CORP
'6': I-CORP
'7': B-GRP
'8': I-GRP
'9': B-PROD
'10': I-PROD
'11': B-CW
'12': I-CW
splits:
- name: train
num_bytes: 4070487
num_examples: 15300
- name: validation
num_bytes: 209337
num_examples: 800
- name: test
num_bytes: 37128925
num_examples: 217337
download_size: 43263864
dataset_size: 41408749
- config_name: ru
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-CORP
'6': I-CORP
'7': B-GRP
'8': I-GRP
'9': B-PROD
'10': I-PROD
'11': B-CW
'12': I-CW
splits:
- name: train
num_bytes: 5313989
num_examples: 15300
- name: validation
num_bytes: 279470
num_examples: 800
- name: test
num_bytes: 47458726
num_examples: 217501
download_size: 54587257
dataset_size: 53052185
- config_name: tr
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-CORP
'6': I-CORP
'7': B-GRP
'8': I-GRP
'9': B-PROD
'10': I-PROD
'11': B-CW
'12': I-CW
splits:
- name: train
num_bytes: 4076774
num_examples: 15300
- name: validation
num_bytes: 213017
num_examples: 800
- name: test
num_bytes: 14779846
num_examples: 136935
download_size: 22825291
dataset_size: 19069637
- config_name: zh
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-CORP
'6': I-CORP
'7': B-GRP
'8': I-GRP
'9': B-PROD
'10': I-PROD
'11': B-CW
'12': I-CW
splits:
- name: train
num_bytes: 5899475
num_examples: 15300
- name: validation
num_bytes: 310396
num_examples: 800
- name: test
num_bytes: 29349271
num_examples: 151661
download_size: 36101525
dataset_size: 35559142
---
# Multilingual Complex Named Entity Recognition (MultiCoNER)
## Dataset Summary
MultiCoNER (version 1) is a large multilingual dataset for Named Entity Recognition that covers 3 domains (Wiki sentences, questions, and search queries) across 11 languages, as well as multilingual and code-mixing subsets. This dataset is designed to represent contemporary challenges in NER, including low-context scenarios (short and uncased text), syntactically complex entities like movie titles, and long-tail entity distributions. The 26M token dataset is compiled from public resources using techniques such as heuristic-based sentence sampling, template extraction and slotting, and machine translation.
See the [AWS Open Data Registry entry for MultiCoNER](https://registry.opendata.aws/multiconer/) for more information.
## Labels
* `PER`: Person, i.e. names of people
* `LOC`: Location, i.e. locations/physical facilities
* `CORP`: Corporation, i.e. corporations/businesses
* `GRP`: Groups, i.e. all other groups
* `PROD`: Product, i.e. consumer products
* `CW`: Creative Work, i.e. movies/songs/book titles
### Dataset Structure
The dataset follows the IOB format of CoNLL. In particular, it uses the following label to ID mapping:
```python
{
"O": 0,
"B-PER": 1,
"I-PER": 2,
"B-LOC": 3,
"I-LOC": 4,
"B-CORP": 5,
"I-CORP": 6,
"B-GRP": 7,
"I-GRP": 8,
"B-PROD": 9,
"I-PROD": 10,
"B-CW": 11,
"I-CW": 12,
}
```
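For decoding model predictions back into IOB tag strings, the mapping above can simply be inverted — a minimal sketch (the `label2id` dict is copied from the mapping above; the `decode` helper name is an assumption):

```python
label2id = {
    "O": 0, "B-PER": 1, "I-PER": 2, "B-LOC": 3, "I-LOC": 4,
    "B-CORP": 5, "I-CORP": 6, "B-GRP": 7, "I-GRP": 8,
    "B-PROD": 9, "I-PROD": 10, "B-CW": 11, "I-CW": 12,
}
# Invert the mapping: class index (the `ner_tags` feature) -> IOB string.
id2label = {i: label for label, i in label2id.items()}

def decode(ner_tags):
    """Convert a sequence of class indices into IOB tag strings."""
    return [id2label[i] for i in ner_tags]

print(decode([1, 2, 0, 3]))  # ['B-PER', 'I-PER', 'O', 'B-LOC']
```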
## Languages
The MultiCoNER dataset consists of the following languages: Bangla, German, English, Spanish, Farsi, Hindi, Korean, Dutch, Russian, Turkish and Chinese.
## Usage
```python
from datasets import load_dataset
dataset = load_dataset('tomaarsen/MultiCoNER', 'multi')
```
## License
CC BY 4.0
## Citation
```
@misc{malmasi2022multiconer,
title={MultiCoNER: A Large-scale Multilingual dataset for Complex Named Entity Recognition},
author={Shervin Malmasi and Anjie Fang and Besnik Fetahu and Sudipta Kar and Oleg Rokhlenko},
year={2022},
eprint={2208.14536},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Arabic-Clip/Arabic_dataset_1M_translated_jsonl_format_ViT-B-16-plus-240 | 2023-10-02T07:16:07.000Z | [
"region:us"
] | Arabic-Clip | null | null | null | 0 | 7 | This translation done using [https://huggingface.co/Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) |
hmao/rule_learning_data_v0_w_old_instruction | 2023-10-01T19:33:57.000Z | [
"region:us"
] | hmao | null | null | null | 0 | 7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: old_instruction
dtype: string
- name: prompt
dtype: string
- name: rule
dtype: string
- name: filepath
dtype: string
- name: description
dtype: string
- name: configuration
dtype: string
- name: reference
dtype: string
- name: task_name
dtype: string
splits:
- name: train
num_bytes: 20294349
num_examples: 6678
download_size: 7247647
dataset_size: 20294349
---
# Dataset Card for "rule_learning_data_v0_w_old_instruction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FelixdoingAI/ip2p-adwm-5000 | 2023-10-03T03:52:47.000Z | [
"region:us"
] | FelixdoingAI | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: original_prompt
dtype: string
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_prompt
dtype: string
- name: edited_image
dtype: image
- name: adversarial_images
dtype: image
splits:
- name: train
num_bytes: 3079160216.0
num_examples: 5000
download_size: 3079020486
dataset_size: 3079160216.0
---
# Dataset Card for "ip2p-adwm-5000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dakadkart/consumer_industril | 2023-10-03T06:56:11.000Z | [
"region:us"
] | dakadkart | null | null | null | 0 | 7 | Entry not found |
hippocrates/medMCQA_test | 2023-10-03T12:30:11.000Z | [
"region:us"
] | hippocrates | null | null | null | 0 | 7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 92341390
num_examples: 182822
- name: valid
num_bytes: 2211041
num_examples: 4183
- name: test
num_bytes: 2211041
num_examples: 4183
download_size: 37750887
dataset_size: 96763472
---
# Dataset Card for "medMCQA_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PericlesSavio/resumo | 2023-10-03T17:47:52.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"task_categories:text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"licens... | PericlesSavio | null | null | null | 0 | 7 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
- text2text-generation
- text-generation
task_ids: []
pretty_name: DIALOGSum Corpus
tags:
- dialogue-summary
- one-liner-summary
- meeting-title
- email-subject
---
# Dataset Card for DIALOGSum Corpus
## Dataset Description
### Links
- **Homepage:** https://aclanthology.org/2021.findings-acl.449
- **Repository:** https://github.com/cylnlp/dialogsum
- **Paper:** https://aclanthology.org/2021.findings-acl.449
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (plus 100 holdout dialogues for topic generation) with corresponding manually labeled summaries and topics.
### Languages
English
## Dataset Structure
### Data Instances
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (+1000 tests) split into train, test and validation.
The first instance in the training set:
{'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor.", 'topic': "get a check-up"}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- topic: human written topic/one liner of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 12460
- val: 500
- test: 1500
- holdout: 100 [Only 3 features: id, dialogue, topic]
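A minimal sketch of turning one instance (with the fields listed above) into a (prompt, target) pair for sequence-to-sequence summarization training — the prompt wording and the `to_prompt` helper are illustrative assumptions, not part of the dataset:

```python
def to_prompt(example):
    """Map one DialogSum instance to a (prompt, target) pair.
    Uses the `dialogue` and `summary` fields from the schema above;
    the prompt template is an illustrative assumption."""
    prompt = (
        "Summarize the following dialogue:\n"
        f"{example['dialogue']}\n"
        "Summary:"
    )
    return prompt, example["summary"]

sample = {
    "id": "train_0",
    "dialogue": "#Person1#: Hi. #Person2#: Hello.",
    "summary": "Two people greet each other.",
    "topic": "greeting",
}
prompt, target = to_prompt(sample)
print(target)  # Two people greet each other.
```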
## Dataset Creation
### Curation Rationale
In paper:
We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. Most conversations take place between friends, colleagues, and between service providers and customers.
Compared with previous datasets, dialogues from DialogSum have distinct characteristics:
- Set in rich real-life scenarios, including more diverse task-oriented scenarios;
- Have clear communication patterns and intents, which makes them valuable summarization sources;
- Have a reasonable length, which suits the purpose of automatic summarization.
We ask annotators to summarize each dialogue based on the following criteria:
- Convey the most salient information;
- Be brief;
- Preserve important named entities within the conversation;
- Be written from an observer perspective;
- Be written in formal language.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
## Licensing Information
CC BY-NC-SA 4.0
## Citation Information
```
@inproceedings{chen-etal-2021-dialogsum,
title = "{D}ialog{S}um: {A} Real-Life Scenario Dialogue Summarization Dataset",
author = "Chen, Yulong and
Liu, Yang and
Chen, Liang and
Zhang, Yue",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.449",
doi = "10.18653/v1/2021.findings-acl.449",
pages = "5062--5074",
}
```
## Contributions
Thanks to [@cylnlp](https://github.com/cylnlp) for adding this dataset. |
relaxtraffic/metartmodels | 2023-10-03T17:35:35.000Z | [
"region:us"
] | relaxtraffic | null | null | null | 0 | 7 | Entry not found |
hails/bigbench | 2023-10-05T16:23:41.000Z | [
"region:us"
] | hails | null | null | null | 1 | 7 | ---
dataset_info:
- config_name: abstract_narrative_understanding_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 6560069
num_examples: 3000
- name: train
num_bytes: 5249819
num_examples: 2400
- name: validation
num_bytes: 1310250
num_examples: 600
download_size: 0
dataset_size: 13120138
- config_name: anachronisms_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 48826
num_examples: 230
- name: train
num_bytes: 39116
num_examples: 184
- name: validation
num_bytes: 9710
num_examples: 46
download_size: 0
dataset_size: 97652
- config_name: analogical_similarity_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1373815
num_examples: 323
- name: train
num_bytes: 1101512
num_examples: 259
- name: validation
num_bytes: 272303
num_examples: 64
download_size: 0
dataset_size: 2747630
- config_name: analytic_entailment_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 17316
num_examples: 70
- name: train
num_bytes: 13368
num_examples: 54
- name: validation
num_bytes: 3948
num_examples: 16
download_size: 0
dataset_size: 34632
- config_name: arithmetic_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 3833272
num_examples: 15023
- name: train
num_bytes: 3066775
num_examples: 12019
- name: validation
num_bytes: 766497
num_examples: 3004
download_size: 0
dataset_size: 7666544
- config_name: ascii_word_recognition_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 4984662
num_examples: 5000
- name: train
num_bytes: 3997273
num_examples: 4000
- name: validation
num_bytes: 987389
num_examples: 1000
download_size: 0
dataset_size: 9969324
- config_name: authorship_verification_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 14118592
num_examples: 880
- name: train
num_bytes: 11288481
num_examples: 704
- name: validation
num_bytes: 2830111
num_examples: 176
download_size: 0
dataset_size: 28237184
- config_name: auto_categorization_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 40549
num_examples: 328
- name: train
num_bytes: 32992
num_examples: 263
- name: validation
num_bytes: 7557
num_examples: 65
download_size: 0
dataset_size: 81098
- config_name: auto_debugging_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 5112
num_examples: 34
- name: train
num_bytes: 2651
num_examples: 18
- name: validation
num_bytes: 2461
num_examples: 16
download_size: 0
dataset_size: 10224
- config_name: bbq_lite_json_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 6890493
num_examples: 16076
- name: train
num_bytes: 5508584
num_examples: 12866
- name: validation
num_bytes: 1381909
num_examples: 3210
download_size: 0
dataset_size: 13780986
- config_name: bridging_anaphora_resolution_barqa_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1971015
num_examples: 648
- name: train
num_bytes: 1537264
num_examples: 519
- name: validation
num_bytes: 433751
num_examples: 129
download_size: 0
dataset_size: 3942030
- config_name: causal_judgment_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 204878
num_examples: 190
- name: train
num_bytes: 164940
num_examples: 152
- name: validation
num_bytes: 39938
num_examples: 38
download_size: 0
dataset_size: 409756
- config_name: cause_and_effect_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 49314
num_examples: 153
- name: train
num_bytes: 39620
num_examples: 123
- name: validation
num_bytes: 9694
num_examples: 30
download_size: 0
dataset_size: 98628
- config_name: checkmate_in_one_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 3123256
num_examples: 3498
- name: train
num_bytes: 2502314
num_examples: 2799
- name: validation
num_bytes: 620942
num_examples: 699
download_size: 0
dataset_size: 6246512
- config_name: chess_state_tracking_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 3269932
num_examples: 6000
- name: train
num_bytes: 2616294
num_examples: 4800
- name: validation
num_bytes: 653638
num_examples: 1200
download_size: 0
dataset_size: 6539864
- config_name: chinese_remainder_theorem_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 153222
num_examples: 500
- name: train
num_bytes: 122601
num_examples: 400
- name: validation
num_bytes: 30621
num_examples: 100
download_size: 0
dataset_size: 306444
- config_name: cifar10_classification_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 111022200
num_examples: 20000
- name: train
num_bytes: 88782724
num_examples: 16000
- name: validation
num_bytes: 22239476
num_examples: 4000
download_size: 0
dataset_size: 222044400
- config_name: code_line_description_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 33670
num_examples: 60
- name: train
num_bytes: 25530
num_examples: 44
- name: validation
num_bytes: 8140
num_examples: 16
download_size: 0
dataset_size: 67340
- config_name: codenames_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 25195
num_examples: 85
- name: train
num_bytes: 19964
num_examples: 68
- name: validation
num_bytes: 5231
num_examples: 17
download_size: 0
dataset_size: 50390
- config_name: color_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1633263
num_examples: 4000
- name: train
num_bytes: 1306663
num_examples: 3200
- name: validation
num_bytes: 326600
num_examples: 800
download_size: 0
dataset_size: 3266526
- config_name: common_morpheme_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 12388
num_examples: 50
- name: train
num_bytes: 8444
num_examples: 34
- name: validation
num_bytes: 3944
num_examples: 16
download_size: 0
dataset_size: 24776
- config_name: conceptual_combinations_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 58859
num_examples: 103
- name: train
num_bytes: 48010
num_examples: 84
- name: validation
num_bytes: 10849
num_examples: 19
download_size: 0
dataset_size: 117718
- config_name: conlang_translation_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 215190
num_examples: 164
- name: train
num_bytes: 173024
num_examples: 132
- name: validation
num_bytes: 42166
num_examples: 32
download_size: 0
dataset_size: 430380
- config_name: contextual_parametric_knowledge_conflicts_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 14587554
num_examples: 17528
- name: train
num_bytes: 11666236
num_examples: 14023
- name: validation
num_bytes: 2921318
num_examples: 3505
download_size: 0
dataset_size: 29175108
- config_name: crash_blossom_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 12194
num_examples: 38
- name: train
num_bytes: 6999
num_examples: 22
- name: validation
num_bytes: 5195
num_examples: 16
download_size: 0
dataset_size: 24388
- config_name: crass_ai_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 22870
num_examples: 44
- name: train
num_bytes: 14130
num_examples: 28
- name: validation
num_bytes: 8740
num_examples: 16
download_size: 0
dataset_size: 45740
- config_name: cryobiology_spanish_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 38674
num_examples: 146
- name: train
num_bytes: 31129
num_examples: 117
- name: validation
num_bytes: 7545
num_examples: 29
download_size: 0
dataset_size: 77348
- config_name: cryptonite_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 2844402
num_examples: 26157
- name: train
num_bytes: 2275724
num_examples: 20926
- name: validation
num_bytes: 568678
num_examples: 5231
download_size: 0
dataset_size: 5688804
- config_name: cs_algorithms_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 272435
num_examples: 1320
- name: train
num_bytes: 218192
num_examples: 1056
- name: validation
num_bytes: 54243
num_examples: 264
download_size: 0
dataset_size: 544870
- config_name: dark_humor_detection_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 26556
num_examples: 80
- name: train
num_bytes: 21267
num_examples: 64
- name: validation
num_bytes: 5289
num_examples: 16
download_size: 0
dataset_size: 53112
- config_name: date_understanding_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 94908
num_examples: 369
- name: train
num_bytes: 76165
num_examples: 296
- name: validation
num_bytes: 18743
num_examples: 73
download_size: 0
dataset_size: 189816
- config_name: disambiguation_qa_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 122471
num_examples: 258
- name: train
num_bytes: 98687
num_examples: 207
- name: validation
num_bytes: 23784
num_examples: 51
download_size: 0
dataset_size: 244942
- config_name: discourse_marker_prediction_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 2090684
num_examples: 857
- name: train
num_bytes: 1666052
num_examples: 686
- name: validation
num_bytes: 424632
num_examples: 171
download_size: 0
dataset_size: 4181368
- config_name: disfl_qa_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 7964775
num_examples: 8000
- name: train
num_bytes: 6376511
num_examples: 6400
- name: validation
num_bytes: 1588264
num_examples: 1600
download_size: 0
dataset_size: 15929550
- config_name: dyck_languages_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1227916
num_examples: 1000
- name: train
num_bytes: 982680
num_examples: 800
- name: validation
num_bytes: 245236
num_examples: 200
download_size: 0
dataset_size: 2455832
- config_name: elementary_math_qa_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 13442550
num_examples: 38160
- name: train
num_bytes: 10766969
num_examples: 30531
- name: validation
num_bytes: 2675581
num_examples: 7629
download_size: 0
dataset_size: 26885100
- config_name: emoji_movie_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 33667
num_examples: 100
- name: train
num_bytes: 26987
num_examples: 80
- name: validation
num_bytes: 6680
num_examples: 20
download_size: 0
dataset_size: 67334
- config_name: emojis_emotion_prediction_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 47983
num_examples: 131
- name: train
num_bytes: 38458
num_examples: 105
- name: validation
num_bytes: 9525
num_examples: 26
download_size: 0
dataset_size: 95966
- config_name: empirical_judgments_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 47499
num_examples: 99
- name: train
num_bytes: 38346
num_examples: 80
- name: validation
num_bytes: 9153
num_examples: 19
download_size: 0
dataset_size: 94998
- config_name: english_proverbs_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 22530
num_examples: 34
- name: train
num_bytes: 12066
num_examples: 18
- name: validation
num_bytes: 10464
num_examples: 16
download_size: 0
dataset_size: 45060
- config_name: english_russian_proverbs_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 59900
num_examples: 80
- name: train
num_bytes: 48051
num_examples: 64
- name: validation
num_bytes: 11849
num_examples: 16
download_size: 0
dataset_size: 119800
- config_name: entailed_polarity_hindi_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 57052
num_examples: 138
- name: train
num_bytes: 45829
num_examples: 111
- name: validation
num_bytes: 11223
num_examples: 27
download_size: 0
dataset_size: 114104
- config_name: entailed_polarity_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 25421
num_examples: 148
- name: train
num_bytes: 20350
num_examples: 119
- name: validation
num_bytes: 5071
num_examples: 29
download_size: 0
dataset_size: 50842
- config_name: epistemic_reasoning_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 887158
num_examples: 2000
- name: train
num_bytes: 710107
num_examples: 1600
- name: validation
num_bytes: 177051
num_examples: 400
download_size: 0
dataset_size: 1774316
- config_name: evaluating_information_essentiality_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 77488
num_examples: 68
- name: train
num_bytes: 59596
num_examples: 52
- name: validation
num_bytes: 17892
num_examples: 16
download_size: 0
dataset_size: 154976
- config_name: fact_checker_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1337384
num_examples: 7154
- name: train
num_bytes: 1070750
num_examples: 5724
- name: validation
num_bytes: 266634
num_examples: 1430
download_size: 0
dataset_size: 2674768
- config_name: fantasy_reasoning_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 75886
num_examples: 201
- name: train
num_bytes: 61398
num_examples: 161
- name: validation
num_bytes: 14488
num_examples: 40
download_size: 0
dataset_size: 151772
- config_name: few_shot_nlg_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 75937
num_examples: 153
- name: train
num_bytes: 61862
num_examples: 123
- name: validation
num_bytes: 14075
num_examples: 30
download_size: 0
dataset_size: 151874
- config_name: figure_of_speech_detection_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 21717
num_examples: 59
- name: train
num_bytes: 15962
num_examples: 43
- name: validation
num_bytes: 5755
num_examples: 16
download_size: 0
dataset_size: 43434
- config_name: formal_fallacies_syllogisms_negation_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 8314653
num_examples: 14200
- name: train
num_bytes: 6652955
num_examples: 11360
- name: validation
num_bytes: 1661698
num_examples: 2840
download_size: 0
dataset_size: 16629306
- config_name: gem_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 36065281
num_examples: 14802
- name: train
num_bytes: 28819497
num_examples: 11845
- name: validation
num_bytes: 7245784
num_examples: 2957
download_size: 0
dataset_size: 72130562
- config_name: gender_inclusive_sentences_german_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 126881
num_examples: 200
- name: train
num_bytes: 100628
num_examples: 160
- name: validation
num_bytes: 26253
num_examples: 40
download_size: 0
dataset_size: 253762
- config_name: general_knowledge_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 21828
num_examples: 70
- name: train
num_bytes: 16818
num_examples: 54
- name: validation
num_bytes: 5010
num_examples: 16
download_size: 0
dataset_size: 43656
- config_name: geometric_shapes_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 180094
num_examples: 359
- name: train
num_bytes: 144602
num_examples: 288
- name: validation
num_bytes: 35492
num_examples: 71
download_size: 0
dataset_size: 360188
- config_name: goal_step_wikihow_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 3567615
num_examples: 7053
- name: train
num_bytes: 2853871
num_examples: 5643
- name: validation
num_bytes: 713744
num_examples: 1410
download_size: 0
dataset_size: 7135230
- config_name: gre_reading_comprehension_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 94273
num_examples: 31
- name: train
num_bytes: 44458
num_examples: 15
- name: validation
num_bytes: 49815
num_examples: 16
download_size: 0
dataset_size: 188546
- config_name: hhh_alignment_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 272898
num_examples: 221
- name: train
num_bytes: 212488
num_examples: 179
- name: validation
num_bytes: 60410
num_examples: 42
download_size: 0
dataset_size: 545796
- config_name: hindi_question_answering_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 15154954
num_examples: 6610
- name: train
num_bytes: 11983837
num_examples: 5288
- name: validation
num_bytes: 3171117
num_examples: 1322
download_size: 0
dataset_size: 30309908
- config_name: hindu_knowledge_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 44092
num_examples: 175
- name: train
num_bytes: 35392
num_examples: 140
- name: validation
num_bytes: 8700
num_examples: 35
download_size: 0
dataset_size: 88184
- config_name: hinglish_toxicity_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 60613
num_examples: 200
- name: train
num_bytes: 49997
num_examples: 160
- name: validation
num_bytes: 10616
num_examples: 40
download_size: 0
dataset_size: 121226
- config_name: human_organs_senses_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 7944
num_examples: 42
- name: train
num_bytes: 4873
num_examples: 26
- name: validation
num_bytes: 3071
num_examples: 16
download_size: 0
dataset_size: 15888
- config_name: hyperbaton_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 9383986
num_examples: 50000
- name: train
num_bytes: 7509334
num_examples: 40000
- name: validation
num_bytes: 1874652
num_examples: 10000
download_size: 0
dataset_size: 18767972
- config_name: identify_math_theorems_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 104841
num_examples: 53
- name: train
num_bytes: 70295
num_examples: 37
- name: validation
num_bytes: 34546
num_examples: 16
download_size: 0
dataset_size: 209682
- config_name: identify_odd_metaphor_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 27602
num_examples: 47
- name: train
num_bytes: 18138
num_examples: 31
- name: validation
num_bytes: 9464
num_examples: 16
download_size: 0
dataset_size: 55204
- config_name: implicatures_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 91683
num_examples: 492
- name: train
num_bytes: 73416
num_examples: 394
- name: validation
num_bytes: 18267
num_examples: 98
download_size: 0
dataset_size: 183366
- config_name: implicit_relations_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 79710
num_examples: 85
- name: train
num_bytes: 64346
num_examples: 68
- name: validation
num_bytes: 15364
num_examples: 17
download_size: 0
dataset_size: 159420
- config_name: intent_recognition_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 322371
num_examples: 693
- name: train
num_bytes: 257864
num_examples: 555
- name: validation
num_bytes: 64507
num_examples: 138
download_size: 0
dataset_size: 644742
- config_name: international_phonetic_alphabet_nli_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 79320
num_examples: 126
- name: train
num_bytes: 63288
num_examples: 101
- name: validation
num_bytes: 16032
num_examples: 25
download_size: 0
dataset_size: 158640
- config_name: international_phonetic_alphabet_transliterate_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 275938
num_examples: 1003
- name: train
num_bytes: 220784
num_examples: 803
- name: validation
num_bytes: 55154
num_examples: 200
download_size: 0
dataset_size: 551876
- config_name: intersect_geometry_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 211674752
num_examples: 249999
- name: train
num_bytes: 169332898
num_examples: 200000
- name: validation
num_bytes: 42341854
num_examples: 49999
download_size: 0
dataset_size: 423349504
- config_name: irony_identification_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 28178
num_examples: 99
- name: train
num_bytes: 22918
num_examples: 80
- name: validation
num_bytes: 5260
num_examples: 19
download_size: 0
dataset_size: 56356
- config_name: kanji_ascii_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 366946
num_examples: 1092
- name: train
num_bytes: 293933
num_examples: 875
- name: validation
num_bytes: 73013
num_examples: 217
download_size: 0
dataset_size: 733892
- config_name: kannada_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 140638
num_examples: 316
- name: train
num_bytes: 111865
num_examples: 253
- name: validation
num_bytes: 28773
num_examples: 63
download_size: 0
dataset_size: 281276
- config_name: key_value_maps_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 105136
num_examples: 101
- name: train
num_bytes: 84317
num_examples: 80
- name: validation
num_bytes: 20819
num_examples: 21
download_size: 0
dataset_size: 210272
- config_name: known_unknowns_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 7960
num_examples: 46
- name: train
num_bytes: 5130
num_examples: 30
- name: validation
num_bytes: 2830
num_examples: 16
download_size: 0
dataset_size: 15920
- config_name: language_games_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 979619
num_examples: 2128
- name: train
num_bytes: 783111
num_examples: 1704
- name: validation
num_bytes: 196508
num_examples: 424
download_size: 0
dataset_size: 1959238
- config_name: language_identification_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 7376223
num_examples: 10000
- name: train
num_bytes: 5908808
num_examples: 8000
- name: validation
num_bytes: 1467415
num_examples: 2000
download_size: 0
dataset_size: 14752446
- config_name: linguistic_mappings_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1325186
num_examples: 15527
- name: train
num_bytes: 1060088
num_examples: 12426
- name: validation
num_bytes: 265098
num_examples: 3101
download_size: 0
dataset_size: 2650372
- config_name: linguistics_puzzles_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1746024
num_examples: 2000
- name: train
num_bytes: 1398113
num_examples: 1600
- name: validation
num_bytes: 347911
num_examples: 400
download_size: 0
dataset_size: 3492048
- config_name: list_functions_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 2678136
num_examples: 10750
- name: train
num_bytes: 2161065
num_examples: 8700
- name: validation
num_bytes: 517071
num_examples: 2050
download_size: 0
dataset_size: 5356272
- config_name: logic_grid_puzzle_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1456218
num_examples: 1000
- name: train
num_bytes: 1160137
num_examples: 800
- name: validation
num_bytes: 296081
num_examples: 200
download_size: 0
dataset_size: 2912436
- config_name: logical_args_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 43582
num_examples: 32
- name: train
num_bytes: 21072
num_examples: 16
- name: validation
num_bytes: 22510
num_examples: 16
download_size: 0
dataset_size: 87164
- config_name: logical_deduction_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1056716
num_examples: 1500
- name: train
num_bytes: 841788
num_examples: 1200
- name: validation
num_bytes: 214928
num_examples: 300
download_size: 0
dataset_size: 2113432
- config_name: logical_fallacy_detection_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 720286
num_examples: 2800
- name: train
num_bytes: 576295
num_examples: 2240
- name: validation
num_bytes: 143991
num_examples: 560
download_size: 0
dataset_size: 1440572
- config_name: logical_sequence_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 22722
num_examples: 39
- name: train
num_bytes: 12648
num_examples: 23
- name: validation
num_bytes: 10074
num_examples: 16
download_size: 8660
dataset_size: 45444
- config_name: mathematical_induction_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 19018
num_examples: 69
- name: train
num_bytes: 14983
num_examples: 53
- name: validation
num_bytes: 4035
num_examples: 16
download_size: 22560
dataset_size: 38036
- config_name: matrixshapes_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1130574
num_examples: 4462
- name: train
num_bytes: 906061
num_examples: 3570
- name: validation
num_bytes: 224513
num_examples: 892
download_size: 436030
dataset_size: 2261148
- config_name: metaphor_boolean_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 213848
num_examples: 680
- name: train
num_bytes: 170765
num_examples: 544
- name: validation
num_bytes: 43083
num_examples: 136
download_size: 102463
dataset_size: 427696
- config_name: metaphor_understanding_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 200862
num_examples: 234
- name: train
num_bytes: 162101
num_examples: 188
- name: validation
num_bytes: 38761
num_examples: 46
download_size: 137229
dataset_size: 401724
- config_name: minute_mysteries_qa_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 3245190
num_examples: 477
- name: train
num_bytes: 2623703
num_examples: 383
- name: validation
num_bytes: 621487
num_examples: 94
download_size: 3955073
dataset_size: 6490380
- config_name: misconceptions_russian_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 16991
num_examples: 49
- name: train
num_bytes: 10970
num_examples: 33
- name: validation
num_bytes: 6021
num_examples: 16
download_size: 29961
dataset_size: 33982
- config_name: misconceptions_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 45816
num_examples: 219
- name: train
num_bytes: 37246
num_examples: 176
- name: validation
num_bytes: 8570
num_examples: 43
download_size: 41069
dataset_size: 91632
- config_name: mnist_ascii_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 61739808
num_examples: 69984
- name: train
num_bytes: 49419928
num_examples: 55988
- name: validation
num_bytes: 12319880
num_examples: 13996
download_size: 20997609
dataset_size: 123479616
- config_name: modified_arithmetic_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1220993
num_examples: 6000
- name: train
num_bytes: 976859
num_examples: 4800
- name: validation
num_bytes: 244134
num_examples: 1200
download_size: 947542
dataset_size: 2441986
- config_name: moral_permissibility_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 162068
num_examples: 342
- name: train
num_bytes: 128790
num_examples: 274
- name: validation
num_bytes: 33278
num_examples: 68
download_size: 80450
dataset_size: 324136
- config_name: movie_dialog_same_or_different_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 28645997
num_examples: 50000
- name: train
num_bytes: 22889061
num_examples: 40000
- name: validation
num_bytes: 5756936
num_examples: 10000
download_size: 19923333
dataset_size: 57291994
- config_name: movie_recommendation_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 173557
num_examples: 500
- name: train
num_bytes: 138936
num_examples: 400
- name: validation
num_bytes: 34621
num_examples: 100
download_size: 151639
dataset_size: 347114
- config_name: mult_data_wrangling_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 625422
num_examples: 7854
- name: train
num_bytes: 507838
num_examples: 6380
- name: validation
num_bytes: 117584
num_examples: 1474
download_size: 260725
dataset_size: 1250844
- config_name: multiemo_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 650173925
num_examples: 1437281
- name: train
num_bytes: 520172185
num_examples: 1149873
- name: validation
num_bytes: 130001740
num_examples: 287408
download_size: 453005461
dataset_size: 1300347850
- config_name: natural_instructions_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 355938370
num_examples: 193250
- name: train
num_bytes: 284920096
num_examples: 154615
- name: validation
num_bytes: 71018274
num_examples: 38635
download_size: 200522980
dataset_size: 711876740
configs:
- config_name: abstract_narrative_understanding_zero_shot
data_files:
- split: default
path: abstract_narrative_understanding_zero_shot/default-*
- split: train
path: abstract_narrative_understanding_zero_shot/train-*
- split: validation
path: abstract_narrative_understanding_zero_shot/validation-*
- config_name: anachronisms_zero_shot
data_files:
- split: default
path: anachronisms_zero_shot/default-*
- split: train
path: anachronisms_zero_shot/train-*
- split: validation
path: anachronisms_zero_shot/validation-*
- config_name: analogical_similarity_zero_shot
data_files:
- split: default
path: analogical_similarity_zero_shot/default-*
- split: train
path: analogical_similarity_zero_shot/train-*
- split: validation
path: analogical_similarity_zero_shot/validation-*
- config_name: analytic_entailment_zero_shot
data_files:
- split: default
path: analytic_entailment_zero_shot/default-*
- split: train
path: analytic_entailment_zero_shot/train-*
- split: validation
path: analytic_entailment_zero_shot/validation-*
- config_name: arithmetic_zero_shot
data_files:
- split: default
path: arithmetic_zero_shot/default-*
- split: train
path: arithmetic_zero_shot/train-*
- split: validation
path: arithmetic_zero_shot/validation-*
- config_name: ascii_word_recognition_zero_shot
data_files:
- split: default
path: ascii_word_recognition_zero_shot/default-*
- split: train
path: ascii_word_recognition_zero_shot/train-*
- split: validation
path: ascii_word_recognition_zero_shot/validation-*
- config_name: authorship_verification_zero_shot
data_files:
- split: default
path: authorship_verification_zero_shot/default-*
- split: train
path: authorship_verification_zero_shot/train-*
- split: validation
path: authorship_verification_zero_shot/validation-*
- config_name: auto_categorization_zero_shot
data_files:
- split: default
path: auto_categorization_zero_shot/default-*
- split: train
path: auto_categorization_zero_shot/train-*
- split: validation
path: auto_categorization_zero_shot/validation-*
- config_name: auto_debugging_zero_shot
data_files:
- split: default
path: auto_debugging_zero_shot/default-*
- split: train
path: auto_debugging_zero_shot/train-*
- split: validation
path: auto_debugging_zero_shot/validation-*
- config_name: bbq_lite_json_zero_shot
data_files:
- split: default
path: bbq_lite_json_zero_shot/default-*
- split: train
path: bbq_lite_json_zero_shot/train-*
- split: validation
path: bbq_lite_json_zero_shot/validation-*
- config_name: bridging_anaphora_resolution_barqa_zero_shot
data_files:
- split: default
path: bridging_anaphora_resolution_barqa_zero_shot/default-*
- split: train
path: bridging_anaphora_resolution_barqa_zero_shot/train-*
- split: validation
path: bridging_anaphora_resolution_barqa_zero_shot/validation-*
- config_name: causal_judgment_zero_shot
data_files:
- split: default
path: causal_judgment_zero_shot/default-*
- split: train
path: causal_judgment_zero_shot/train-*
- split: validation
path: causal_judgment_zero_shot/validation-*
- config_name: cause_and_effect_zero_shot
data_files:
- split: default
path: cause_and_effect_zero_shot/default-*
- split: train
path: cause_and_effect_zero_shot/train-*
- split: validation
path: cause_and_effect_zero_shot/validation-*
- config_name: checkmate_in_one_zero_shot
data_files:
- split: default
path: checkmate_in_one_zero_shot/default-*
- split: train
path: checkmate_in_one_zero_shot/train-*
- split: validation
path: checkmate_in_one_zero_shot/validation-*
- config_name: chess_state_tracking_zero_shot
data_files:
- split: default
path: chess_state_tracking_zero_shot/default-*
- split: train
path: chess_state_tracking_zero_shot/train-*
- split: validation
path: chess_state_tracking_zero_shot/validation-*
- config_name: chinese_remainder_theorem_zero_shot
data_files:
- split: default
path: chinese_remainder_theorem_zero_shot/default-*
- split: train
path: chinese_remainder_theorem_zero_shot/train-*
- split: validation
path: chinese_remainder_theorem_zero_shot/validation-*
- config_name: cifar10_classification_zero_shot
data_files:
- split: default
path: cifar10_classification_zero_shot/default-*
- split: train
path: cifar10_classification_zero_shot/train-*
- split: validation
path: cifar10_classification_zero_shot/validation-*
- config_name: code_line_description_zero_shot
data_files:
- split: default
path: code_line_description_zero_shot/default-*
- split: train
path: code_line_description_zero_shot/train-*
- split: validation
path: code_line_description_zero_shot/validation-*
- config_name: codenames_zero_shot
data_files:
- split: default
path: codenames_zero_shot/default-*
- split: train
path: codenames_zero_shot/train-*
- split: validation
path: codenames_zero_shot/validation-*
- config_name: color_zero_shot
data_files:
- split: default
path: color_zero_shot/default-*
- split: train
path: color_zero_shot/train-*
- split: validation
path: color_zero_shot/validation-*
- config_name: common_morpheme_zero_shot
data_files:
- split: default
path: common_morpheme_zero_shot/default-*
- split: train
path: common_morpheme_zero_shot/train-*
- split: validation
path: common_morpheme_zero_shot/validation-*
- config_name: conceptual_combinations_zero_shot
data_files:
- split: default
path: conceptual_combinations_zero_shot/default-*
- split: train
path: conceptual_combinations_zero_shot/train-*
- split: validation
path: conceptual_combinations_zero_shot/validation-*
- config_name: conlang_translation_zero_shot
data_files:
- split: default
path: conlang_translation_zero_shot/default-*
- split: train
path: conlang_translation_zero_shot/train-*
- split: validation
path: conlang_translation_zero_shot/validation-*
- config_name: contextual_parametric_knowledge_conflicts_zero_shot
data_files:
- split: default
path: contextual_parametric_knowledge_conflicts_zero_shot/default-*
- split: train
path: contextual_parametric_knowledge_conflicts_zero_shot/train-*
- split: validation
path: contextual_parametric_knowledge_conflicts_zero_shot/validation-*
- config_name: crash_blossom_zero_shot
data_files:
- split: default
path: crash_blossom_zero_shot/default-*
- split: train
path: crash_blossom_zero_shot/train-*
- split: validation
path: crash_blossom_zero_shot/validation-*
- config_name: crass_ai_zero_shot
data_files:
- split: default
path: crass_ai_zero_shot/default-*
- split: train
path: crass_ai_zero_shot/train-*
- split: validation
path: crass_ai_zero_shot/validation-*
- config_name: cryobiology_spanish_zero_shot
data_files:
- split: default
path: cryobiology_spanish_zero_shot/default-*
- split: train
path: cryobiology_spanish_zero_shot/train-*
- split: validation
path: cryobiology_spanish_zero_shot/validation-*
- config_name: cryptonite_zero_shot
data_files:
- split: default
path: cryptonite_zero_shot/default-*
- split: train
path: cryptonite_zero_shot/train-*
- split: validation
path: cryptonite_zero_shot/validation-*
- config_name: cs_algorithms_zero_shot
data_files:
- split: default
path: cs_algorithms_zero_shot/default-*
- split: train
path: cs_algorithms_zero_shot/train-*
- split: validation
path: cs_algorithms_zero_shot/validation-*
- config_name: dark_humor_detection_zero_shot
data_files:
- split: default
path: dark_humor_detection_zero_shot/default-*
- split: train
path: dark_humor_detection_zero_shot/train-*
- split: validation
path: dark_humor_detection_zero_shot/validation-*
- config_name: date_understanding_zero_shot
data_files:
- split: default
path: date_understanding_zero_shot/default-*
- split: train
path: date_understanding_zero_shot/train-*
- split: validation
path: date_understanding_zero_shot/validation-*
- config_name: disambiguation_qa_zero_shot
data_files:
- split: default
path: disambiguation_qa_zero_shot/default-*
- split: train
path: disambiguation_qa_zero_shot/train-*
- split: validation
path: disambiguation_qa_zero_shot/validation-*
- config_name: discourse_marker_prediction_zero_shot
data_files:
- split: default
path: discourse_marker_prediction_zero_shot/default-*
- split: train
path: discourse_marker_prediction_zero_shot/train-*
- split: validation
path: discourse_marker_prediction_zero_shot/validation-*
- config_name: disfl_qa_zero_shot
data_files:
- split: default
path: disfl_qa_zero_shot/default-*
- split: train
path: disfl_qa_zero_shot/train-*
- split: validation
path: disfl_qa_zero_shot/validation-*
- config_name: dyck_languages_zero_shot
data_files:
- split: default
path: dyck_languages_zero_shot/default-*
- split: train
path: dyck_languages_zero_shot/train-*
- split: validation
path: dyck_languages_zero_shot/validation-*
- config_name: elementary_math_qa_zero_shot
data_files:
- split: default
path: elementary_math_qa_zero_shot/default-*
- split: train
path: elementary_math_qa_zero_shot/train-*
- split: validation
path: elementary_math_qa_zero_shot/validation-*
- config_name: emoji_movie_zero_shot
data_files:
- split: default
path: emoji_movie_zero_shot/default-*
- split: train
path: emoji_movie_zero_shot/train-*
- split: validation
path: emoji_movie_zero_shot/validation-*
- config_name: emojis_emotion_prediction_zero_shot
data_files:
- split: default
path: emojis_emotion_prediction_zero_shot/default-*
- split: train
path: emojis_emotion_prediction_zero_shot/train-*
- split: validation
path: emojis_emotion_prediction_zero_shot/validation-*
- config_name: empirical_judgments_zero_shot
data_files:
- split: default
path: empirical_judgments_zero_shot/default-*
- split: train
path: empirical_judgments_zero_shot/train-*
- split: validation
path: empirical_judgments_zero_shot/validation-*
- config_name: english_proverbs_zero_shot
data_files:
- split: default
path: english_proverbs_zero_shot/default-*
- split: train
path: english_proverbs_zero_shot/train-*
- split: validation
path: english_proverbs_zero_shot/validation-*
- config_name: english_russian_proverbs_zero_shot
data_files:
- split: default
path: english_russian_proverbs_zero_shot/default-*
- split: train
path: english_russian_proverbs_zero_shot/train-*
- split: validation
path: english_russian_proverbs_zero_shot/validation-*
- config_name: entailed_polarity_hindi_zero_shot
data_files:
- split: default
path: entailed_polarity_hindi_zero_shot/default-*
- split: train
path: entailed_polarity_hindi_zero_shot/train-*
- split: validation
path: entailed_polarity_hindi_zero_shot/validation-*
- config_name: entailed_polarity_zero_shot
data_files:
- split: default
path: entailed_polarity_zero_shot/default-*
- split: train
path: entailed_polarity_zero_shot/train-*
- split: validation
path: entailed_polarity_zero_shot/validation-*
- config_name: epistemic_reasoning_zero_shot
data_files:
- split: default
path: epistemic_reasoning_zero_shot/default-*
- split: train
path: epistemic_reasoning_zero_shot/train-*
- split: validation
path: epistemic_reasoning_zero_shot/validation-*
- config_name: evaluating_information_essentiality_zero_shot
data_files:
- split: default
path: evaluating_information_essentiality_zero_shot/default-*
- split: train
path: evaluating_information_essentiality_zero_shot/train-*
- split: validation
path: evaluating_information_essentiality_zero_shot/validation-*
- config_name: fact_checker_zero_shot
data_files:
- split: default
path: fact_checker_zero_shot/default-*
- split: train
path: fact_checker_zero_shot/train-*
- split: validation
path: fact_checker_zero_shot/validation-*
- config_name: fantasy_reasoning_zero_shot
data_files:
- split: default
path: fantasy_reasoning_zero_shot/default-*
- split: train
path: fantasy_reasoning_zero_shot/train-*
- split: validation
path: fantasy_reasoning_zero_shot/validation-*
- config_name: few_shot_nlg_zero_shot
data_files:
- split: default
path: few_shot_nlg_zero_shot/default-*
- split: train
path: few_shot_nlg_zero_shot/train-*
- split: validation
path: few_shot_nlg_zero_shot/validation-*
- config_name: figure_of_speech_detection_zero_shot
data_files:
- split: default
path: figure_of_speech_detection_zero_shot/default-*
- split: train
path: figure_of_speech_detection_zero_shot/train-*
- split: validation
path: figure_of_speech_detection_zero_shot/validation-*
- config_name: formal_fallacies_syllogisms_negation_zero_shot
data_files:
- split: default
path: formal_fallacies_syllogisms_negation_zero_shot/default-*
- split: train
path: formal_fallacies_syllogisms_negation_zero_shot/train-*
- split: validation
path: formal_fallacies_syllogisms_negation_zero_shot/validation-*
- config_name: gem_zero_shot
data_files:
- split: default
path: gem_zero_shot/default-*
- split: train
path: gem_zero_shot/train-*
- split: validation
path: gem_zero_shot/validation-*
- config_name: gender_inclusive_sentences_german_zero_shot
data_files:
- split: default
path: gender_inclusive_sentences_german_zero_shot/default-*
- split: train
path: gender_inclusive_sentences_german_zero_shot/train-*
- split: validation
path: gender_inclusive_sentences_german_zero_shot/validation-*
- config_name: general_knowledge_zero_shot
data_files:
- split: default
path: general_knowledge_zero_shot/default-*
- split: train
path: general_knowledge_zero_shot/train-*
- split: validation
path: general_knowledge_zero_shot/validation-*
- config_name: geometric_shapes_zero_shot
data_files:
- split: default
path: geometric_shapes_zero_shot/default-*
- split: train
path: geometric_shapes_zero_shot/train-*
- split: validation
path: geometric_shapes_zero_shot/validation-*
- config_name: goal_step_wikihow_zero_shot
data_files:
- split: default
path: goal_step_wikihow_zero_shot/default-*
- split: train
path: goal_step_wikihow_zero_shot/train-*
- split: validation
path: goal_step_wikihow_zero_shot/validation-*
- config_name: gre_reading_comprehension_zero_shot
data_files:
- split: default
path: gre_reading_comprehension_zero_shot/default-*
- split: train
path: gre_reading_comprehension_zero_shot/train-*
- split: validation
path: gre_reading_comprehension_zero_shot/validation-*
- config_name: hhh_alignment_zero_shot
data_files:
- split: default
path: hhh_alignment_zero_shot/default-*
- split: train
path: hhh_alignment_zero_shot/train-*
- split: validation
path: hhh_alignment_zero_shot/validation-*
- config_name: hindi_question_answering_zero_shot
data_files:
- split: default
path: hindi_question_answering_zero_shot/default-*
- split: train
path: hindi_question_answering_zero_shot/train-*
- split: validation
path: hindi_question_answering_zero_shot/validation-*
- config_name: hindu_knowledge_zero_shot
data_files:
- split: default
path: hindu_knowledge_zero_shot/default-*
- split: train
path: hindu_knowledge_zero_shot/train-*
- split: validation
path: hindu_knowledge_zero_shot/validation-*
- config_name: hinglish_toxicity_zero_shot
data_files:
- split: default
path: hinglish_toxicity_zero_shot/default-*
- split: train
path: hinglish_toxicity_zero_shot/train-*
- split: validation
path: hinglish_toxicity_zero_shot/validation-*
- config_name: human_organs_senses_zero_shot
data_files:
- split: default
path: human_organs_senses_zero_shot/default-*
- split: train
path: human_organs_senses_zero_shot/train-*
- split: validation
path: human_organs_senses_zero_shot/validation-*
- config_name: hyperbaton_zero_shot
data_files:
- split: default
path: hyperbaton_zero_shot/default-*
- split: train
path: hyperbaton_zero_shot/train-*
- split: validation
path: hyperbaton_zero_shot/validation-*
- config_name: identify_math_theorems_zero_shot
data_files:
- split: default
path: identify_math_theorems_zero_shot/default-*
- split: train
path: identify_math_theorems_zero_shot/train-*
- split: validation
path: identify_math_theorems_zero_shot/validation-*
- config_name: identify_odd_metaphor_zero_shot
data_files:
- split: default
path: identify_odd_metaphor_zero_shot/default-*
- split: train
path: identify_odd_metaphor_zero_shot/train-*
- split: validation
path: identify_odd_metaphor_zero_shot/validation-*
- config_name: implicatures_zero_shot
data_files:
- split: default
path: implicatures_zero_shot/default-*
- split: train
path: implicatures_zero_shot/train-*
- split: validation
path: implicatures_zero_shot/validation-*
- config_name: implicit_relations_zero_shot
data_files:
- split: default
path: implicit_relations_zero_shot/default-*
- split: train
path: implicit_relations_zero_shot/train-*
- split: validation
path: implicit_relations_zero_shot/validation-*
- config_name: intent_recognition_zero_shot
data_files:
- split: default
path: intent_recognition_zero_shot/default-*
- split: train
path: intent_recognition_zero_shot/train-*
- split: validation
path: intent_recognition_zero_shot/validation-*
- config_name: international_phonetic_alphabet_nli_zero_shot
data_files:
- split: default
path: international_phonetic_alphabet_nli_zero_shot/default-*
- split: train
path: international_phonetic_alphabet_nli_zero_shot/train-*
- split: validation
path: international_phonetic_alphabet_nli_zero_shot/validation-*
- config_name: international_phonetic_alphabet_transliterate_zero_shot
data_files:
- split: default
path: international_phonetic_alphabet_transliterate_zero_shot/default-*
- split: train
path: international_phonetic_alphabet_transliterate_zero_shot/train-*
- split: validation
path: international_phonetic_alphabet_transliterate_zero_shot/validation-*
- config_name: intersect_geometry_zero_shot
data_files:
- split: default
path: intersect_geometry_zero_shot/default-*
- split: train
path: intersect_geometry_zero_shot/train-*
- split: validation
path: intersect_geometry_zero_shot/validation-*
- config_name: irony_identification_zero_shot
data_files:
- split: default
path: irony_identification_zero_shot/default-*
- split: train
path: irony_identification_zero_shot/train-*
- split: validation
path: irony_identification_zero_shot/validation-*
- config_name: kanji_ascii_zero_shot
data_files:
- split: default
path: kanji_ascii_zero_shot/default-*
- split: train
path: kanji_ascii_zero_shot/train-*
- split: validation
path: kanji_ascii_zero_shot/validation-*
- config_name: kannada_zero_shot
data_files:
- split: default
path: kannada_zero_shot/default-*
- split: train
path: kannada_zero_shot/train-*
- split: validation
path: kannada_zero_shot/validation-*
- config_name: key_value_maps_zero_shot
data_files:
- split: default
path: key_value_maps_zero_shot/default-*
- split: train
path: key_value_maps_zero_shot/train-*
- split: validation
path: key_value_maps_zero_shot/validation-*
- config_name: known_unknowns_zero_shot
data_files:
- split: default
path: known_unknowns_zero_shot/default-*
- split: train
path: known_unknowns_zero_shot/train-*
- split: validation
path: known_unknowns_zero_shot/validation-*
- config_name: language_games_zero_shot
data_files:
- split: default
path: language_games_zero_shot/default-*
- split: train
path: language_games_zero_shot/train-*
- split: validation
path: language_games_zero_shot/validation-*
- config_name: language_identification_zero_shot
data_files:
- split: default
path: language_identification_zero_shot/default-*
- split: train
path: language_identification_zero_shot/train-*
- split: validation
path: language_identification_zero_shot/validation-*
- config_name: linguistic_mappings_zero_shot
data_files:
- split: default
path: linguistic_mappings_zero_shot/default-*
- split: train
path: linguistic_mappings_zero_shot/train-*
- split: validation
path: linguistic_mappings_zero_shot/validation-*
- config_name: linguistics_puzzles_zero_shot
data_files:
- split: default
path: linguistics_puzzles_zero_shot/default-*
- split: train
path: linguistics_puzzles_zero_shot/train-*
- split: validation
path: linguistics_puzzles_zero_shot/validation-*
- config_name: list_functions_zero_shot
data_files:
- split: default
path: list_functions_zero_shot/default-*
- split: train
path: list_functions_zero_shot/train-*
- split: validation
path: list_functions_zero_shot/validation-*
- config_name: logic_grid_puzzle_zero_shot
data_files:
- split: default
path: logic_grid_puzzle_zero_shot/default-*
- split: train
path: logic_grid_puzzle_zero_shot/train-*
- split: validation
path: logic_grid_puzzle_zero_shot/validation-*
- config_name: logical_args_zero_shot
data_files:
- split: default
path: logical_args_zero_shot/default-*
- split: train
path: logical_args_zero_shot/train-*
- split: validation
path: logical_args_zero_shot/validation-*
- config_name: logical_deduction_zero_shot
data_files:
- split: default
path: logical_deduction_zero_shot/default-*
- split: train
path: logical_deduction_zero_shot/train-*
- split: validation
path: logical_deduction_zero_shot/validation-*
- config_name: logical_fallacy_detection_zero_shot
data_files:
- split: default
path: logical_fallacy_detection_zero_shot/default-*
- split: train
path: logical_fallacy_detection_zero_shot/train-*
- split: validation
path: logical_fallacy_detection_zero_shot/validation-*
- config_name: logical_sequence_zero_shot
data_files:
- split: default
path: logical_sequence_zero_shot/default-*
- split: train
path: logical_sequence_zero_shot/train-*
- split: validation
path: logical_sequence_zero_shot/validation-*
- config_name: mathematical_induction_zero_shot
data_files:
- split: default
path: mathematical_induction_zero_shot/default-*
- split: train
path: mathematical_induction_zero_shot/train-*
- split: validation
path: mathematical_induction_zero_shot/validation-*
- config_name: matrixshapes_zero_shot
data_files:
- split: default
path: matrixshapes_zero_shot/default-*
- split: train
path: matrixshapes_zero_shot/train-*
- split: validation
path: matrixshapes_zero_shot/validation-*
- config_name: metaphor_boolean_zero_shot
data_files:
- split: default
path: metaphor_boolean_zero_shot/default-*
- split: train
path: metaphor_boolean_zero_shot/train-*
- split: validation
path: metaphor_boolean_zero_shot/validation-*
- config_name: metaphor_understanding_zero_shot
data_files:
- split: default
path: metaphor_understanding_zero_shot/default-*
- split: train
path: metaphor_understanding_zero_shot/train-*
- split: validation
path: metaphor_understanding_zero_shot/validation-*
- config_name: minute_mysteries_qa_zero_shot
data_files:
- split: default
path: minute_mysteries_qa_zero_shot/default-*
- split: train
path: minute_mysteries_qa_zero_shot/train-*
- split: validation
path: minute_mysteries_qa_zero_shot/validation-*
- config_name: misconceptions_russian_zero_shot
data_files:
- split: default
path: misconceptions_russian_zero_shot/default-*
- split: train
path: misconceptions_russian_zero_shot/train-*
- split: validation
path: misconceptions_russian_zero_shot/validation-*
- config_name: misconceptions_zero_shot
data_files:
- split: default
path: misconceptions_zero_shot/default-*
- split: train
path: misconceptions_zero_shot/train-*
- split: validation
path: misconceptions_zero_shot/validation-*
- config_name: mnist_ascii_zero_shot
data_files:
- split: default
path: mnist_ascii_zero_shot/default-*
- split: train
path: mnist_ascii_zero_shot/train-*
- split: validation
path: mnist_ascii_zero_shot/validation-*
- config_name: modified_arithmetic_zero_shot
data_files:
- split: default
path: modified_arithmetic_zero_shot/default-*
- split: train
path: modified_arithmetic_zero_shot/train-*
- split: validation
path: modified_arithmetic_zero_shot/validation-*
- config_name: moral_permissibility_zero_shot
data_files:
- split: default
path: moral_permissibility_zero_shot/default-*
- split: train
path: moral_permissibility_zero_shot/train-*
- split: validation
path: moral_permissibility_zero_shot/validation-*
- config_name: movie_dialog_same_or_different_zero_shot
data_files:
- split: default
path: movie_dialog_same_or_different_zero_shot/default-*
- split: train
path: movie_dialog_same_or_different_zero_shot/train-*
- split: validation
path: movie_dialog_same_or_different_zero_shot/validation-*
- config_name: movie_recommendation_zero_shot
data_files:
- split: default
path: movie_recommendation_zero_shot/default-*
- split: train
path: movie_recommendation_zero_shot/train-*
- split: validation
path: movie_recommendation_zero_shot/validation-*
- config_name: mult_data_wrangling_zero_shot
data_files:
- split: default
path: mult_data_wrangling_zero_shot/default-*
- split: train
path: mult_data_wrangling_zero_shot/train-*
- split: validation
path: mult_data_wrangling_zero_shot/validation-*
- config_name: multiemo_zero_shot
data_files:
- split: default
path: multiemo_zero_shot/default-*
- split: train
path: multiemo_zero_shot/train-*
- split: validation
path: multiemo_zero_shot/validation-*
- config_name: natural_instructions_zero_shot
data_files:
- split: default
path: natural_instructions_zero_shot/default-*
- split: train
path: natural_instructions_zero_shot/train-*
- split: validation
path: natural_instructions_zero_shot/validation-*
---
# Dataset Card for "bigbench"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
King-Harry/NinjaMasker-PII-Redaction-Dataset | 2023-10-04T15:22:51.000Z | [
"license:apache-2.0",
"region:us"
] | King-Harry | null | null | null | 0 | 7 | ---
license: apache-2.0
---
|
umarigan/turkish_corpus | 2023-10-04T19:09:07.000Z | [
"region:us"
] | umarigan | null | null | null | 0 | 7 | Entry not found |
ishannbx/arXiv-one-shot-classification-l27b-E02-large-b05 | 2023-10-05T05:14:37.000Z | [
"license:mit",
"region:us"
] | ishannbx | null | null | null | 0 | 7 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 3103900
num_examples: 467
- name: test
num_bytes: 780031
num_examples: 117
download_size: 654972
dataset_size: 3883931
---
|
atg456/legal_data_hf | 2023-10-05T12:39:38.000Z | [
"region:us"
] | atg456 | null | null | null | 0 | 7 | Entry not found |
shengqin/web-attacks-old | 2023-10-05T15:38:36.000Z | [
"region:us"
] | shengqin | null | null | null | 0 | 7 | Entry not found |
Talelaw/fnghb | 2023-10-06T05:16:15.000Z | [
"license:eupl-1.1",
"region:us"
] | Talelaw | null | null | null | 0 | 7 | ---
license: eupl-1.1
---
|
Falah/fantasy_animal_prompts | 2023-10-06T06:41:41.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 2645706
num_examples: 10000
download_size: 335130
dataset_size: 2645706
---
# Dataset Card for "fantasy_animal_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
carnival13/massive_eval_DA_tokenized | 2023-10-06T10:19:45.000Z | [
"region:us"
] | carnival13 | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 23064510
num_examples: 24160
download_size: 5097845
dataset_size: 23064510
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_eval_DA_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TheAIchemist13/beekeeping_tech_hi | 2023-10-06T11:02:47.000Z | [
"region:us"
] | TheAIchemist13 | null | null | null | 0 | 7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: target_text
dtype: string
splits:
- name: train
num_bytes: 4605091.0
num_examples: 110
- name: test
num_bytes: 1616943.0
num_examples: 40
download_size: 6141646
dataset_size: 6222034.0
---
# Dataset Card for "beekeeping_tech_hi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nlplabtdtu/university-dataset | 2023-10-06T18:09:17.000Z | [
"region:us"
] | nlplabtdtu | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: title
dtype: string
- name: body
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 1032712459
num_examples: 213847
download_size: 389863864
dataset_size: 1032712459
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "university-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CWKSC/common_voice_11_0-hi-whisper-small | 2023-10-07T06:44:04.000Z | [
"region:us"
] | CWKSC | null | null | null | 1 | 7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 6283293032
num_examples: 6540
- name: test
num_bytes: 2780330000
num_examples: 2894
download_size: 0
dataset_size: 9063623032
---
# Dataset Card for "common_voice_11_0-hi-whisper-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
carnival13/massive_eng_DA3_tokenized | 2023-10-07T10:59:35.000Z | [
"region:us"
] | carnival13 | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 97253830
num_examples: 138200
download_size: 22040467
dataset_size: 97253830
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_eng_DA3_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
infCapital/financial_phrasebank_en | 2023-10-07T15:52:46.000Z | [
"region:us"
] | infCapital | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2048295
num_examples: 14780
download_size: 1185669
dataset_size: 2048295
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "financial_phrasebank_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falah/programming_book_cover_prompts | 2023-10-08T09:00:51.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 191332
num_examples: 1000
download_size: 24579
dataset_size: 191332
---
# Dataset Card for "programming_book_cover_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Lollitor/SMILES10M | 2023-10-09T11:03:23.000Z | [
"region:us"
] | Lollitor | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1098769008
num_examples: 10000000
download_size: 434321680
dataset_size: 1098769008
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SMILES10M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kowndinya23/t0-submix-mistral-512 | 2023-10-08T15:06:32.000Z | [
"region:us"
] | kowndinya23 | null | null | null | 0 | 7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype:
class_label:
names:
'0': adversarial_qa_dbert_answer_the_following_q
'1': adversarial_qa_dbert_based_on
'2': adversarial_qa_dbert_generate_question
'3': adversarial_qa_dbert_question_context_answer
'4': adversarial_qa_dbert_tell_what_it_is
'5': adversarial_qa_dbidaf_answer_the_following_q
'6': adversarial_qa_dbidaf_based_on
'7': adversarial_qa_dbidaf_generate_question
'8': adversarial_qa_dbidaf_question_context_answer
'9': adversarial_qa_dbidaf_tell_what_it_is
'10': adversarial_qa_droberta_answer_the_following_q
'11': adversarial_qa_droberta_based_on
'12': adversarial_qa_droberta_generate_question
'13': adversarial_qa_droberta_question_context_answer
'14': adversarial_qa_droberta_tell_what_it_is
'15': amazon_polarity_Is_this_product_review_positive
'16': amazon_polarity_Is_this_review
'17': amazon_polarity_Is_this_review_negative
'18': amazon_polarity_User_recommend_this_product
'19': amazon_polarity_convey_negative_or_positive_sentiment
'20': amazon_polarity_flattering_or_not
'21': amazon_polarity_negative_or_positive_tone
'22': amazon_polarity_user_satisfied
'23': amazon_polarity_would_you_buy
'24': app_reviews_categorize_rating_using_review
'25': app_reviews_convert_to_rating
'26': app_reviews_convert_to_star_rating
'27': app_reviews_generate_review
'28': cos_e_v1.11_aligned_with_common_sense
'29': cos_e_v1.11_description_question_option_id
'30': cos_e_v1.11_description_question_option_text
'31': cos_e_v1.11_explain_why_human
'32': cos_e_v1.11_generate_explanation_given_text
'33': cos_e_v1.11_i_think
'34': cos_e_v1.11_question_description_option_id
'35': cos_e_v1.11_question_description_option_text
'36': cos_e_v1.11_question_option_description_id
'37': cos_e_v1.11_question_option_description_text
'38': cos_e_v1.11_rationale
'39': dbpedia_14_given_a_choice_of_categories_
'40': dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to
'41': dbpedia_14_given_list_what_category_does_the_paragraph_belong_to
'42': dbpedia_14_pick_one_category_for_the_following_text
'43': dream_answer_to_dialogue
'44': dream_baseline
'45': dream_generate_first_utterance
'46': dream_generate_last_utterance
'47': dream_read_the_following_conversation_and_answer_the_question
'48': duorc_ParaphraseRC_answer_question
'49': duorc_ParaphraseRC_build_story_around_qa
'50': duorc_ParaphraseRC_decide_worth_it
'51': duorc_ParaphraseRC_extract_answer
'52': duorc_ParaphraseRC_generate_question
'53': duorc_ParaphraseRC_generate_question_by_answer
'54': duorc_ParaphraseRC_movie_director
'55': duorc_ParaphraseRC_question_answering
'56': duorc_ParaphraseRC_title_generation
'57': duorc_SelfRC_answer_question
'58': duorc_SelfRC_build_story_around_qa
'59': duorc_SelfRC_decide_worth_it
'60': duorc_SelfRC_extract_answer
'61': duorc_SelfRC_generate_question
'62': duorc_SelfRC_generate_question_by_answer
'63': duorc_SelfRC_movie_director
'64': duorc_SelfRC_question_answering
'65': duorc_SelfRC_title_generation
'66': kilt_tasks_hotpotqa_combining_facts
'67': kilt_tasks_hotpotqa_complex_question
'68': kilt_tasks_hotpotqa_final_exam
'69': kilt_tasks_hotpotqa_formulate
'70': kilt_tasks_hotpotqa_straighforward_qa
'71': qasc_is_correct_1
'72': qasc_is_correct_2
'73': qasc_qa_with_combined_facts_1
'74': qasc_qa_with_separated_facts_1
'75': qasc_qa_with_separated_facts_2
'76': qasc_qa_with_separated_facts_3
'77': qasc_qa_with_separated_facts_4
'78': qasc_qa_with_separated_facts_5
'79': quail_context_description_question_answer_id
'80': quail_context_description_question_answer_text
'81': quail_context_description_question_text
'82': quail_context_question_answer_description_id
'83': quail_context_question_answer_description_text
'84': quail_context_question_description_answer_id
'85': quail_context_question_description_answer_text
'86': quail_context_question_description_text
'87': quail_description_context_question_answer_id
'88': quail_description_context_question_answer_text
'89': quail_description_context_question_text
'90': quail_no_prompt_id
'91': quail_no_prompt_text
'92': quarel_choose_between
'93': quarel_do_not_use
'94': quarel_heres_a_story
'95': quarel_logic_test
'96': quarel_testing_students
'97': quartz_answer_question_based_on
'98': quartz_answer_question_below
'99': quartz_given_the_fact_answer_the_q
'100': quartz_having_read_above_passage
'101': quartz_paragraph_question_plain_concat
'102': quartz_read_passage_below_choose
'103': quartz_use_info_from_paragraph_question
'104': quartz_use_info_from_question_paragraph
'105': quoref_Answer_Friend_Question
'106': quoref_Answer_Question_Given_Context
'107': quoref_Answer_Test
'108': quoref_Context_Contains_Answer
'109': quoref_Find_Answer
'110': quoref_Found_Context_Online
'111': quoref_Given_Context_Answer_Question
'112': quoref_Guess_Answer
'113': quoref_Guess_Title_For_Context
'114': quoref_Read_And_Extract_
'115': quoref_What_Is_The_Answer
'116': race_high_Is_this_the_right_answer
'117': race_high_Read_the_article_and_answer_the_question_no_option_
'118': race_high_Select_the_best_answer
'119': race_high_Select_the_best_answer_generate_span_
'120': race_high_Select_the_best_answer_no_instructions_
'121': race_high_Taking_a_test
'122': race_high_Write_a_multi_choice_question_for_the_following_article
'123': race_high_Write_a_multi_choice_question_options_given_
'124': race_middle_Is_this_the_right_answer
'125': race_middle_Read_the_article_and_answer_the_question_no_option_
'126': race_middle_Select_the_best_answer
'127': race_middle_Select_the_best_answer_generate_span_
'128': race_middle_Select_the_best_answer_no_instructions_
'129': race_middle_Taking_a_test
'130': race_middle_Write_a_multi_choice_question_for_the_following_article
'131': race_middle_Write_a_multi_choice_question_options_given_
'132': ropes_background_new_situation_answer
'133': ropes_background_situation_middle
'134': ropes_given_background_situation
'135': ropes_new_situation_background_answer
'136': ropes_plain_background_situation
'137': ropes_plain_bottom_hint
'138': ropes_plain_no_background
'139': ropes_prompt_beginning
'140': ropes_prompt_bottom_hint_beginning
'141': ropes_prompt_bottom_no_hint
'142': ropes_prompt_mix
'143': ropes_read_background_situation
'144': sciq_Direct_Question
'145': sciq_Direct_Question_Closed_Book_
'146': sciq_Multiple_Choice
'147': sciq_Multiple_Choice_Closed_Book_
'148': sciq_Multiple_Choice_Question_First
'149': social_i_qa_Check_if_a_random_answer_is_valid_or_not
'150': social_i_qa_Generate_answer
'151': social_i_qa_Generate_the_question_from_the_answer
'152': social_i_qa_I_was_wondering
'153': social_i_qa_Show_choices_and_generate_answer
'154': social_i_qa_Show_choices_and_generate_index
'155': web_questions_get_the_answer
'156': web_questions_potential_correct_answer
'157': web_questions_question_answer
'158': web_questions_short_general_knowledge_q
'159': web_questions_whats_the_answer
'160': wiki_bio_comprehension
'161': wiki_bio_guess_person
'162': wiki_bio_key_content
'163': wiki_bio_what_content
'164': wiki_bio_who
'165': wiki_hop_original_choose_best_object_affirmative_1
'166': wiki_hop_original_choose_best_object_affirmative_2
'167': wiki_hop_original_choose_best_object_affirmative_3
'168': wiki_hop_original_choose_best_object_interrogative_1
'169': wiki_hop_original_choose_best_object_interrogative_2
'170': wiki_hop_original_explain_relation
'171': wiki_hop_original_generate_object
'172': wiki_hop_original_generate_subject
'173': wiki_hop_original_generate_subject_and_object
'174': wiki_qa_Decide_good_answer
'175': wiki_qa_Direct_Answer_to_Question
'176': wiki_qa_Generate_Question_from_Topic
'177': wiki_qa_Is_This_True_
'178': wiki_qa_Jeopardy_style
'179': wiki_qa_Topic_Prediction_Answer_Only
'180': wiki_qa_Topic_Prediction_Question_Only
'181': wiki_qa_Topic_Prediction_Question_and_Answer_Pair
'182': wiki_qa_automatic_system
'183': wiki_qa_exercise
'184': wiki_qa_found_on_google
'185': wiqa_does_the_supposed_perturbation_have_an_effect
'186': wiqa_effect_with_label_answer
'187': wiqa_effect_with_string_answer
'188': wiqa_what_is_the_final_step_of_the_following_process
'189': wiqa_what_is_the_missing_first_step
'190': wiqa_what_might_be_the_first_step_of_the_process
'191': wiqa_what_might_be_the_last_step_of_the_process
'192': wiqa_which_of_the_following_is_the_supposed_perturbation
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 866284853.1490041
num_examples: 901997
- name: validation
num_bytes: 8751234.850995874
num_examples: 9112
download_size: 501582309
dataset_size: 875036088.0
---
# Dataset Card for "t0-submix-mistral-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kowndinya23/niv2-submix-mistral-512 | 2023-10-08T15:50:06.000Z | [
"region:us"
] | kowndinya23 | null | null | null | 0 | 7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype:
class_label:
names:
'0': task001_quoref_question_generation
'1': task002_quoref_answer_generation
'2': task003_mctaco_question_generation_event_duration
'3': task004_mctaco_answer_generation_event_duration
'4': task005_mctaco_wrong_answer_generation_event_duration
'5': task006_mctaco_question_generation_transient_stationary
'6': task007_mctaco_answer_generation_transient_stationary
'7': task008_mctaco_wrong_answer_generation_transient_stationary
'8': task009_mctaco_question_generation_event_ordering
'9': task010_mctaco_answer_generation_event_ordering
'10': task011_mctaco_wrong_answer_generation_event_ordering
'11': task012_mctaco_question_generation_absolute_timepoint
'12': task013_mctaco_answer_generation_absolute_timepoint
'13': task014_mctaco_wrong_answer_generation_absolute_timepoint
'14': task015_mctaco_question_generation_frequency
'15': task016_mctaco_answer_generation_frequency
'16': task017_mctaco_wrong_answer_generation_frequency
'17': task018_mctaco_temporal_reasoning_presence
'18': task019_mctaco_temporal_reasoning_category
'19': task020_mctaco_span_based_question
'20': task021_mctaco_grammatical_logical
'21': task022_cosmosqa_passage_inappropriate_binary
'22': task023_cosmosqa_question_generation
'23': task024_cosmosqa_answer_generation
'24': task025_cosmosqa_incorrect_answer_generation
'25': task026_drop_question_generation
'26': task027_drop_answer_type_generation
'27': task028_drop_answer_generation
'28': task030_winogrande_full_person
'29': task032_winogrande_question_generation_person
'30': task033_winogrande_answer_generation
'31': task035_winogrande_question_modification_person
'32': task036_qasc_topic_word_to_generate_related_fact
'33': task037_qasc_generate_related_fact
'34': task038_qasc_combined_fact
'35': task039_qasc_find_overlapping_words
'36': task040_qasc_question_generation
'37': task041_qasc_answer_generation
'38': task042_qasc_incorrect_option_generation
'39': task043_essential_terms_answering_incomplete_questions
'40': task044_essential_terms_identifying_essential_words
'41': task045_miscellaneous_sentence_paraphrasing
'42': task047_miscellaneous_answering_science_questions
'43': task048_multirc_question_generation
'44': task049_multirc_questions_needed_to_answer
'45': task050_multirc_answerability
'46': task051_multirc_correct_answer_single_sentence
'47': task052_multirc_identify_bad_question
'48': task053_multirc_correct_bad_question
'49': task054_multirc_write_correct_answer
'50': task055_multirc_write_incorrect_answer
'51': task056_multirc_classify_correct_answer
'52': task057_multirc_classify_incorrect_answer
'53': task058_multirc_question_answering
'54': task059_ropes_story_generation
'55': task060_ropes_question_generation
'56': task061_ropes_answer_generation
'57': task062_bigbench_repeat_copy_logic
'58': task063_first_i_elements
'59': task064_all_elements_except_first_i
'60': task065_timetravel_consistent_sentence_classification
'61': task066_timetravel_binary_consistency_classification
'62': task067_abductivenli_answer_generation
'63': task068_abductivenli_incorrect_answer_generation
'64': task069_abductivenli_classification
'65': task070_abductivenli_incorrect_classification
'66': task071_abductivenli_answer_generation
'67': task072_abductivenli_answer_generation
'68': task073_commonsenseqa_answer_generation
'69': task074_squad1.1_question_generation
'70': task075_squad1.1_answer_generation
'71': task077_splash_explanation_to_sql
'72': task078_all_elements_except_last_i
'73': task079_conala_concat_strings
'74': task080_piqa_answer_generation
'75': task081_piqa_wrong_answer_generation
'76': task082_babi_t1_single_supporting_fact_question_generation
'77': task083_babi_t1_single_supporting_fact_answer_generation
'78': task084_babi_t1_single_supporting_fact_identify_relevant_fact
'79': task085_unnatural_addsub_arithmetic
'80': task086_translated_symbol_arithmetic
'81': task087_new_operator_addsub_arithmetic
'82': task088_identify_typo_verification
'83': task089_swap_words_verification
'84': task090_equation_learner_algebra
'85': task091_all_elements_from_index_i_to_j
'86': task092_check_prime_classification
'87': task093_conala_normalize_lists
'88': task094_conala_calculate_mean
'89': task095_conala_max_absolute_value
'90': task096_conala_list_index_subtraction
'91': task097_conala_remove_duplicates
'92': task098_conala_list_intersection
'93': task099_reverse_elements_between_index_i_and_j
'94': task1000_pib_translation_tamil_malayalam
'95': task1001_pib_translation_gujarati_urdu
'96': task1002_pib_translation_urdu_gujarati
'97': task1003_pib_translation_bengali_malayalam
'98': task1004_pib_translation_malayalam_bengali
'99': task1005_pib_translation_malayalam_punjabi
'100': task1006_pib_translation_punjabi_malayalam
'101': task1007_pib_translation_english_punjabi
'102': task1008_pib_translation_punjabi_english
'103': task1009_pib_translation_bengali_hindi
'104': task100_concatenate_all_elements_from_index_i_to_j
'105': task1010_pib_translation_hindi_bengali
'106': task1011_pib_translation_hindi_punjabi
'107': task1012_pib_translation_punjabi_hindi
'108': task1013_pib_translation_gujarati_telugu
'109': task1014_pib_translation_telugu_gujarati
'110': task1015_pib_translation_punjabi_tamil
'111': task1016_pib_translation_tamil_punjabi
'112': task1017_pib_translation_hindi_malayalam
'113': task1018_pib_translation_malayalam_hindi
'114': task1019_pib_translation_oriya_telugu
'115': task101_reverse_and_concatenate_all_elements_from_index_i_to_j
'116': task1020_pib_translation_telugu_oriya
'117': task1021_pib_translation_english_malayalam
'118': task1022_pib_translation_malayalam_english
'119': task1023_pib_translation_english_hindi
'120': task1024_pib_translation_hindi_english
'121': task1025_pib_translation_bengali_punjabi
'122': task1026_pib_translation_punjabi_bengali
'123': task1027_pib_translation_marathi_telugu
'124': task1028_pib_translation_telugu_marathi
'125': task1029_pib_translation_marathi_punjabi
'126': task102_commongen_sentence_generation
'127': task1030_pib_translation_punjabi_marathi
'128': task1031_pib_translation_bengali_telugu
'129': task1032_pib_translation_telugu_bengali
'130': task1033_pib_translation_gujarati_hindi
'131': task1034_pib_translation_hindi_gujarati
'132': task1035_pib_translation_tamil_urdu
'133': task1036_pib_translation_urdu_tamil
'134': task1037_pib_translation_telugu_urdu
'135': task1038_pib_translation_urdu_telugu
'136': task1039_pib_translation_oriya_punjabi
'137': task103_facts2story_long_text_generation
'138': task1040_pib_translation_punjabi_oriya
'139': task1041_pib_translation_gujarati_malayalam
'140': task1042_pib_translation_malayalam_gujarati
'141': task1043_pib_translation_gujarati_punjabi
'142': task1044_pib_translation_punjabi_gujarati
'143': task1045_pib_translation_hindi_telugu
'144': task1046_pib_translation_telugu_hindi
'145': task1047_pib_translation_english_telugu
'146': task1048_pib_translation_telugu_english
'147': task1049_pib_translation_malayalam_telugu
'148': task104_semeval_2019_task10_closed_vocabulary_mathematical_answer_generation
'149': task1050_pib_translation_telugu_malayalam
'150': task1051_pib_translation_punjabi_urdu
'151': task1052_pib_translation_urdu_punjabi
'152': task1053_pib_translation_hindi_urdu
'153': task1054_pib_translation_urdu_hindi
'154': task1055_pib_translation_marathi_oriya
'155': task1056_pib_translation_oriya_marathi
'156': task1057_pib_translation_english_urdu
'157': task1058_pib_translation_urdu_english
'158': task1059_pib_translation_malayalam_urdu
'159': task105_story_cloze-rocstories_sentence_generation
'160': task1060_pib_translation_urdu_malayalam
'161': task1061_pib_translation_bengali_marathi
'162': task1062_pib_translation_marathi_bengali
'163': task1063_pib_translation_gujarati_tamil
'164': task1064_pib_translation_tamil_gujarati
'165': task1065_pib_translation_punjabi_telugu
'166': task1066_pib_translation_telugu_punjabi
'167': task1067_pib_translation_bengali_gujarati
'168': task1068_pib_translation_gujarati_bengali
'169': task1069_pib_translation_bengali_urdu
'170': task106_scruples_ethical_judgment
'171': task1070_pib_translation_urdu_bengali
'172': task1071_pib_translation_malayalam_marathi
'173': task1072_pib_translation_marathi_malayalam
'174': task1073_pib_translation_oriya_tamil
'175': task1074_pib_translation_tamil_oriya
'176': task1075_pib_translation_tamil_telugu
'177': task1076_pib_translation_telugu_tamil
'178': task1077_pib_translation_gujarati_oriya
'179': task1078_pib_translation_oriya_gujarati
'180': task1079_pib_translation_english_gujarati
'181': task107_splash_question_to_sql
'182': task1080_pib_translation_gujarati_english
'183': task1081_pib_translation_hindi_marathi
'184': task1082_pib_translation_marathi_hindi
'185': task1083_pib_translation_marathi_tamil
'186': task1084_pib_translation_tamil_marathi
'187': task1085_pib_translation_english_marathi
'188': task1086_pib_translation_marathi_english
'189': task1087_two_number_sum
'190': task1088_array_of_products
'191': task1089_check_monotonic_array
'192': task108_contextualabusedetection_classification
'193': task1090_ted_translation_en_gl
'194': task1091_ted_translation_en_it
'195': task1092_ted_translation_en_pl
'196': task1093_ted_translation_en_fa
'197': task1094_ted_translation_en_pt
'198': task1095_ted_translation_ja_gl
'199': task1096_ted_translation_ja_it
'200': task1097_ted_translation_ja_pl
'201': task1098_ted_translation_ja_fa
'202': task1099_ted_translation_ja_pt
'203': task109_smsspamcollection_spamsmsdetection
'204': task1100_ted_translation_es_gl
'205': task1101_ted_translation_es_it
'206': task1102_ted_translation_es_pl
'207': task1103_ted_translation_es_fa
'208': task1104_ted_translation_es_pt
'209': task1105_ted_translation_ar_gl
'210': task1106_ted_translation_ar_it
'211': task1107_ted_translation_ar_pl
'212': task1108_ted_translation_ar_fa
'213': task1109_ted_translation_ar_pt
'214': task1110_ted_translation_he_gl
'215': task1111_ted_translation_he_it
'216': task1112_ted_translation_he_pl
'217': task1113_ted_translation_he_fa
'218': task1114_ted_translation_he_pt
'219': task1115_alt_ja_id_translation
'220': task1116_alt_id_ja_translation
'221': task1117_alt_ja_id_answer_generation
'222': task1118_alt_ja_fil_translation
'223': task1119_alt_fil_ja_translation
'224': task111_asset_sentence_simplification
'225': task1120_alt_ja_fil_answer_generation
'226': task1121_alt_ja_khm_translation
'227': task1122_alt_khm_ja_translation
'228': task1123_alt_ja_khm_answer_generation
'229': task1124_alt_ja_lo_translation
'230': task1125_alt_lo_ja_translation
'231': task1126_alt_ja_lo_answer_generation
'232': task1127_alt_ja_th_translation
'233': task1128_alt_th_ja_translation
'234': task1129_alt_ja_th_answer_generation
'235': task112_asset_simple_sentence_identification
'236': task1130_xcsr_vi_commonsense_mc_classification
'237': task1131_xcsr_es_commonsense_mc_classification
'238': task1132_xcsr_ur_commonsense_mc_classification
'239': task1133_xcsr_nl_commonsense_mc_classification
'240': task1134_xcsr_hi_commonsense_mc_classification
'241': task1135_xcsr_en_commonsense_mc_classification
'242': task1136_xcsr_fr_commonsense_mc_classification
'243': task1137_xcsr_pt_commonsense_mc_classification
'244': task1138_xcsr_de_commonsense_mc_classification
'245': task1139_xcsr_ru_commonsense_mc_classification
'246': task113_count_frequency_of_letter
'247': task1140_xcsr_pl_commonsense_mc_classification
'248': task1141_xcsr_zh_commonsense_mc_classification
'249': task1142_xcsr_ar_commonsense_mc_classification
'250': task1143_xcsr_it_commonsense_mc_classification
'251': task1144_xcsr_sw_commonsense_mc_classification
'252': task1145_xcsr_jap_commonsense_mc_classification
'253': task1146_country_capital
'254': task1147_country_currency
'255': task1148_maximum_ascii_value
'256': task1149_item_check_edible
'257': task114_is_the_given_word_longest
'258': task1150_delete_max_min
'259': task1151_swap_max_min
'260': task1152_bard_analogical_reasoning_causation
'261': task1153_bard_analogical_reasoning_affordance
'262': task1154_bard_analogical_reasoning_travel
'263': task1155_bard_analogical_reasoning_trash_or_treasure
'264': task1156_bard_analogical_reasoning_tools
'265': task1157_bard_analogical_reasoning_rooms_for_containers
'266': task1158_bard_analogical_reasoning_manipulating_items
'267': task1159_bard_analogical_reasoning_containers
'268': task115_help_advice_classification
'269': task1161_coda19_title_generation
'270': task1162_coda19_title_classification
'271': task1163_coda19_section_classification
'272': task1164_coda19_section_correction_classification
'273': task1168_xcopa_commonsense_reasoning_ht
'274': task1169_xcopa_commonsense_cause_effect_ht
'275': task116_com2sense_commonsense_reasoning
'276': task1170_xcopa_commonsense_reasoning_id
'277': task1171_xcopa_commonsense_cause_effect_id
'278': task1172_xcopa_commonsense_reasoning_it
'279': task1173_xcopa_commonsense_cause_effect_it
'280': task1174_xcopa_commonsense_reasoning_sw
'281': task1175_xcopa_commonsense_cause_effect_sw
'282': task1176_xcopa_commonsense_reasoning_ta
'283': task1177_xcopa_commonsense_cause_effect_ta
'284': task1178_xcopa_commonsense_reasoning_th
'285': task1179_xcopa_commonsense_cause_effect_th
'286': task117_spl_translation_en_de
'287': task1180_xcopa_commonsense_reasoning_tr
'288': task1181_xcopa_commonsense_cause_effect_tr
'289': task1182_xcopa_commonsense_reasoning_vi
'290': task1183_xcopa_commonsense_cause_effect_vi
'291': task1184_xcopa_commonsense_reasoning_zh
'292': task1185_xcopa_commonsense_cause_effect_zh
'293': task1186_nne_hrngo_classification
'294': task1187_politifact_classification
'295': task1188_count_max_freq_char
'296': task1189_check_char_in_string
'297': task118_semeval_2019_task10_open_vocabulary_mathematical_answer_generation
'298': task1190_add_integer_to_list
'299': task1191_food_veg_nonveg
'300': task1192_food_flavor_profile
'301': task1193_food_course_classification
'302': task1194_kth_largest_element
'303': task1195_disflqa_disfluent_to_fluent_conversion
'304': task1196_atomic_classification_oeffect
'305': task1197_atomic_classification_oreact
'306': task1198_atomic_classification_owant
'307': task1199_atomic_classification_xattr
'308': task119_semeval_2019_task10_geometric_mathematical_answer_generation
'309': task1200_atomic_classification_xeffect
'310': task1201_atomic_classification_xintent
'311': task1202_atomic_classification_xneed
'312': task1203_atomic_classification_xreact
'313': task1204_atomic_classification_hinderedby
'314': task1205_atomic_classification_isafter
'315': task1206_atomic_classification_isbefore
'316': task1207_atomic_classification_atlocation
'317': task1208_atomic_classification_xreason
'318': task1209_atomic_classification_objectuse
'319': task120_zest_text_modification
'320': task1210_atomic_classification_madeupof
'321': task1211_atomic_classification_hassubevent
'322': task1212_atomic_classification_hasproperty
'323': task1213_atomic_classification_desires
'324': task1214_atomic_classification_xwant
'325': task1215_atomic_classification_capableof
'326': task1216_atomic_classification_causes
'327': task1217_atomic_answer_generation
'328': task1218_ted_translation_en_ja
'329': task1219_ted_translation_en_es
'330': task121_zest_text_modification
'331': task1220_ted_translation_en_ar
'332': task1221_ted_translation_en_he
'333': task1222_ted_translation_ja_en
'334': task1223_ted_translation_ja_es
'335': task1224_ted_translation_ja_ar
'336': task1225_ted_translation_ja_he
'337': task1226_ted_translation_es_en
'338': task1227_ted_translation_es_ja
'339': task1228_ted_translation_es_ar
'340': task1229_ted_translation_es_he
'341': task122_conala_list_index_addition
'342': task1230_ted_translation_ar_en
'343': task1231_ted_translation_ar_ja
'344': task1232_ted_translation_ar_es
'345': task1233_ted_translation_ar_he
'346': task1234_ted_translation_he_en
'347': task1235_ted_translation_he_ja
'348': task1236_ted_translation_he_es
'349': task1237_ted_translation_he_ar
'350': task1238_ted_translation_gl_en
'351': task1239_ted_translation_gl_ja
'352': task123_conala_sort_dictionary
'353': task1240_ted_translation_gl_es
'354': task1241_ted_translation_gl_ar
'355': task1242_ted_translation_gl_he
'356': task1243_ted_translation_gl_it
'357': task1244_ted_translation_gl_pl
'358': task1245_ted_translation_gl_fa
'359': task1246_ted_translation_gl_pt
'360': task1247_ted_translation_it_en
'361': task1248_ted_translation_it_ja
'362': task1249_ted_translation_it_es
'363': task124_conala_pair_averages
'364': task1250_ted_translation_it_ar
'365': task1251_ted_translation_it_he
'366': task1252_ted_translation_it_gl
'367': task1253_ted_translation_it_pl
'368': task1254_ted_translation_it_fa
'369': task1255_ted_translation_it_pt
'370': task1256_ted_translation_pl_en
'371': task1257_ted_translation_pl_ja
'372': task1258_ted_translation_pl_es
'373': task1259_ted_translation_pl_ar
'374': task125_conala_pair_differences
'375': task1260_ted_translation_pl_he
'376': task1261_ted_translation_pl_gl
'377': task1262_ted_translation_pl_it
'378': task1263_ted_translation_pl_fa
'379': task1264_ted_translation_pl_pt
'380': task1265_ted_translation_fa_en
'381': task1266_ted_translation_fa_ja
'382': task1267_ted_translation_fa_es
'383': task1268_ted_translation_fa_ar
'384': task1269_ted_translation_fa_he
'385': task126_scan_structured_text_generation_command_action_all
'386': task1270_ted_translation_fa_gl
'387': task1271_ted_translation_fa_it
'388': task1272_ted_translation_fa_pl
'389': task1273_ted_translation_fa_pt
'390': task1274_ted_translation_pt_en
'391': task1275_ted_translation_pt_ja
'392': task1276_ted_translation_pt_es
'393': task1277_ted_translation_pt_ar
'394': task1278_ted_translation_pt_he
'395': task1279_ted_translation_pt_gl
'396': task127_scan_long_text_generation_action_command_all
'397': task1280_ted_translation_pt_it
'398': task1281_ted_translation_pt_pl
'399': task1282_ted_translation_pt_fa
'400': task1283_hrngo_quality_classification
'401': task1284_hrngo_informativeness_classification
'402': task1285_kpa_keypoint_matching
'403': task1286_openbookqa_question_answering
'404': task1287_glue_qqp_paraphrasing
'405': task1288_glue_mrpc_paraphrasing
'406': task1289_trec_classification
'407': task128_scan_structured_text_generation_command_action_short
'408': task1290_xsum_summarization
'409': task1291_multi_news_summarization
'410': task1292_yelp_review_full_text_categorization
'411': task1293_kilt_tasks_hotpotqa_question_answering
'412': task1294_wiki_qa_answer_verification
'413': task1295_adversarial_qa_question_answering
'414': task1296_wiki_hop_question_answering
'415': task1297_qasc_question_answering
'416': task129_scan_long_text_generation_action_command_short
'417': task1308_amazonreview_category_classification
'418': task1309_amazonreview_summary_classification
'419': task130_scan_structured_text_generation_command_action_long
'420': task1310_amazonreview_rating_classification
'421': task1311_amazonreview_rating_classification
'422': task1312_amazonreview_polarity_classification
'423': task1313_amazonreview_polarity_classification
'424': task1314_country_abbreviation
'425': task1315_find_range_array
'426': task1316_remove_duplicates_string
'427': task1317_country_calling_code
'428': task1318_country_national_dish
'429': task1319_country_by_barcode_prefix
'430': task131_scan_long_text_generation_action_command_long
'431': task1320_country_domain_tld
'432': task1321_country_continent
'433': task1322_country_government_type
'434': task1323_open_subtitles_hi_en_translation
'435': task1324_open_subtitles_te_en_translation
'436': task1325_qa_zre_question_generation_on_subject_relation
'437': task1326_qa_zre_question_generation_from_answer
'438': task1327_qa_zre_answer_generation_from_question
'439': task1328_qa_zre_relation_generation_from_question
'440': task1329_open_subtitles_en_hi_translation
'441': task132_dais_text_modification
'442': task1330_open_subtitles_en_te_translation
'443': task1331_reverse_array
'444': task1332_check_leap_year
'445': task1333_check_validity_date_ddmmyyyy
'446': task1334_sqac_answer_generation
'447': task1335_sqac_question_generation
'448': task1336_peixian_equity_evaluation_corpus_gender_classifier
'449': task1338_peixian_equity_evaluation_corpus_sentiment_classifier
'450': task1339_peixian_equity_evaluation_corpus_text_completion
'451': task133_winowhy_reason_plausibility_detection
'452': task1340_msr_text_compression_compression
'453': task1341_msr_text_classification
'454': task1342_amazon_us_reviews_title
'455': task1343_amazon_us_reviews_rating
'456': task1344_glue_entailment_classification
'457': task1345_glue_qqp_question_paraprashing
'458': task1346_glue_cola_grammatical_correctness_classification
'459': task1347_glue_sts-b_similarity_classification
'460': task134_winowhy_reason_generation
'461': task1350_opus100_translation_en_gu
'462': task1351_opus100_translation_gu_en
'463': task1352_hind_encorp_translation_hi_en
'464': task1353_hind_encorp_translation_en_hi
'465': task1354_sent_comp_classification
'466': task1355_sent_comp_summarization
'467': task1356_xlsum_title_generation
'468': task1357_xlsum_summary_generation
'469': task1358_xlsum_title_generation
'470': task1359_numer_sense_answer_generation
'471': task135_winowhy_wrong_reason_generation
'472': task1360_numer_sense_multiple_choice_qa_generation
'473': task1361_movierationales_classification
'474': task1364_hans_answer_generation
'475': task1365_opustedtalks_translation
'476': task1366_healthfact_classification
'477': task1367_opustedtalks_translation
'478': task1368_healthfact_sentence_generation
'479': task1369_healthfact_sentence_generation
'480': task136_winowhy_knowledge_categorization
'481': task1370_newscomm_classification
'482': task1371_newscomm_translation
'483': task1373_newscomm_translation
'484': task1374_newscomm_translation
'485': task1375_newscomm_translation
'486': task1376_newscomm_translation
'487': task1377_newscomm_translation
'488': task1378_quarel_correct_answer_generation
'489': task1379_quarel_incorrect_answer_generation
'490': task137_detoxifying-lms_classification_toxicity
'491': task1380_quarel_correct_option_generation
'492': task1381_quarel_incorrect_option_generation
'493': task1382_quarel_write_correct_answer
'494': task1383_quarel_write_incorrect_answer
'495': task1384_deal_or_no_dialog_classification
'496': task1385_anli_r1_entailment
'497': task1386_anli_r2_entailment
'498': task1387_anli_r3_entailment
'499': task1388_cb_entailment
'500': task1389_hellaswag_completion
'501': task138_detoxifying-lms_classification_fluency
'502': task1390_wscfixed_coreference
'503': task1391_winogrande_easy_answer_generation
'504': task1392_superglue_multirc_answer_verification
'505': task1393_superglue_copa_text_completion
'506': task1394_meta_woz_task_classification
'507': task1395_europa_ecdc_tm_en_sv_translation
'508': task1396_europa_ecdc_tm_en_de_translation
'509': task1397_europa_ecdc_tm_fr_en_translation
'510': task1398_obqa_question_generation
'511': task1399_obqa_answer_generation
'512': task139_detoxifying-lms_classification_topicality
'513': task1400_obqa_incorrect_answer_generation
'514': task1401_obqa_sentence_generation
'515': task1402_clue_question_generation
'516': task1403_check_validity_date_mmddyyyy
'517': task1404_date_conversion
'518': task1405_find_median
'519': task1406_kth_smallest_element
'520': task1407_dart_question_generation
'521': task1408_dart_similarity_classification
'522': task1409_dart_text_generation
'523': task140_detoxifying-lms_classification_style
'524': task1410_dart_relationship_extraction
'525': task1411_dart_subject_identification
'526': task1412_web_questions_question_answering
'527': task1413_dart_object_identification
'528': task1414_ajgt_twitter_ar_classification
'529': task1415_youtube_caption_corrections_grammar_correction
'530': task1416_youtube_caption_corrections_incorrect_grammar_classification
'531': task1418_bless_semantic_relation_classification
'532': task1419_mathqa_gain
'533': task141_odd-man-out_classification_category
'534': task1420_mathqa_general
'535': task1421_mathqa_other
'536': task1422_mathqa_physics
'537': task1423_mathqa_geometry
'538': task1424_mathqa_probability
'539': task1425_country_iso_numeric
'540': task1426_country_independence_year
'541': task1427_country_region_in_world
'542': task1428_country_surface_area
'543': task1429_evalution_semantic_relation_classification
'544': task142_odd-man-out_classification_no_category
'545': task1431_head_qa_answer_generation
'546': task1432_head_qa_language_translation_en_to_es
'547': task1433_head_qa_language_translation_es_to_en
'548': task1434_head_qa_classification
'549': task1435_ro_sts_parallel_language_translation_ro_to_en
'550': task1436_ro_sts_parallel_language_translation_en_to_ro
'551': task1437_doqa_cooking_question_generation
'552': task1438_doqa_cooking_answer_generation
'553': task1439_doqa_cooking_isanswerable
'554': task143_odd-man-out_classification_generate_category
'555': task1440_doqa_movies_question_generation
'556': task1441_doqa_movies_answer_generation
'557': task1442_doqa_movies_isanswerable
'558': task1443_string_to_number
'559': task1444_round_power_of_two
'560': task1445_closest_integers
'561': task1446_farthest_integers
'562': task1447_drug_extraction_ade
'563': task1448_disease_entity_extraction_ncbi_dataset
'564': task1449_disease_entity_extraction_bc5cdr_dataset
'565': task144_subjqa_question_answering
'566': task1451_drug_dose_extraction
'567': task1452_location_entity_extraction_btc_corpus
'568': task1453_person_entity_extraction_btc_corpus
'569': task145_afs_argument_similarity_death_penalty
'570': task146_afs_argument_similarity_gun_control
'571': task1479_organization_entity_extraction_btc_corpus
'572': task147_afs_argument_similarity_gay_marriage
'573': task1480_gene_extraction_jnlpba_dataset
'574': task1481_gene_extraction_bc2gm_dataset
'575': task1482_gene_extraction_chemprot_dataset
'576': task1483_chemical_extraction_chemprot_dataset
'577': task1484_gene_extraction_linnaeus_dataset
'578': task1485_organ_extraction_anem_dataset
'579': task1486_cell_extraction_anem_dataset
'580': task1487_organism_substance_extraction_anem_dataset
'581': task1488_sarcasmdetection_headline_classification
'582': task1489_sarcasmdetection_tweet_classification
'583': task148_afs_argument_quality_gay_marriage
'584': task1490_bengali_personal_hate_speech_binary_classification
'585': task1491_bengali_political_hate_speech_binary_classification
'586': task1492_bengali_religious_hate_speech_binary_classification
'587': task1493_bengali_geopolitical_hate_speech_binary_classification
'588': task1494_bengali_hate_speech_classification
'589': task1495_adverse_drug_event_classification
'590': task1496_bengali_reviews_sentiment_classification
'591': task1497_bengali_book_reviews_sentiment_classification
'592': task1498_24hour_to_12hour_clock
'593': task1499_dstc3_summarization
'594': task149_afs_argument_quality_death_penalty
'595': task1500_dstc3_classification
'596': task1501_dstc3_answer_generation
'597': task1502_hatexplain_classification
'598': task1503_hatexplain_classification
'599': task1504_hatexplain_answer_generation
'600': task1505_root09_semantic_relation_classification
'601': task1506_celebrity_minimal_dob_span
'602': task1507_boolean_temporal_reasoning
'603': task1508_wordnet_antonyms
'604': task1509_evalution_antonyms
'605': task150_afs_argument_quality_gun_control
'606': task1510_evalution_relation_extraction
'607': task1514_flores_translation_entone
'608': task1515_imppres_longtextgeneration
'609': task1516_imppres_naturallanguageinference
'610': task1517_limit_classfication
'611': task1518_limit_answer_generation
'612': task1519_qa_srl_question_generation
'613': task151_tomqa_find_location_easy_clean
'614': task1520_qa_srl_answer_generation
'615': task1529_scitail1.1_classification
'616': task152_tomqa_find_location_easy_noise
'617': task1530_scitail1.1_sentence_generation
'618': task1531_daily_dialog_type_classification
'619': task1532_daily_dialog_emotion_classification
'620': task1533_daily_dialog_formal_classification
'621': task1534_daily_dialog_question_classification
'622': task1535_daily_dialog_uniqueness_classification
'623': task1536_daily_dialog_happiness_classification
'624': task1537_tamil_offenseval_dravidian_classification
'625': task1538_malayalam_offenseval_dravidian_classification
'626': task1539_kannada_offenseval_dravidian_classification
'627': task153_tomqa_find_location_hard_clean
'628': task1540_parsed_pdfs_summarization
'629': task1541_agnews_classification
'630': task1542_every_ith_element_from_starting
'631': task1543_conll2002_parts_of_speech_tagging_answer_generation
'632': task1544_conll2002_named_entity_recognition_answer_generation
'633': task1545_conll2002_person_name_extraction_answer_generation
'634': task1546_conll2002_location_name_extraction_answer_generation
'635': task1548_wiqa_binary_classification
'636': task1549_wiqa_answer_generation_missing_step
'637': task154_tomqa_find_location_hard_noise
'638': task1551_every_ith_element_from_kth_element
'639': task1552_scitail_question_generation
'640': task1553_cnn_dailymail_summarization
'641': task1554_scitail_classification
'642': task1555_scitail_answer_generation
'643': task1556_scitail_passage_generation
'644': task1557_jfleg_answer_generation
'645': task1558_jfleg_incorrect_answer_generation
'646': task1559_blimp_binary_classification
'647': task155_count_nouns_verbs
'648': task1560_blimp_binary_classification
'649': task1561_clickbait_new_bg_summarization
'650': task1562_zest_text_modification
'651': task1564_triviaqa_answer_generation
'652': task1565_triviaqa_classification
'653': task1566_propara_structured_text_generation
'654': task1567_propara_question_generation
'655': task1568_propara_classification
'656': task1569_cmrc2018_question_generation
'657': task156_codah_classification_adversarial
'658': task1570_cmrc2018_answer_generation
'659': task1571_cmrc2018_answer_generation_starting_index
'660': task1572_samsum_summary
'661': task1573_samsum_classification
'662': task1574_amazon_reviews_multi_language_identification
'663': task1575_amazon_reviews_multi_sentiment_classification
'664': task1576_amazon_reviews_multi_english_language_classification
'665': task1577_amazon_reviews_multi_japanese_language_classification
'666': task1579_gigaword_incorrect_summarization
'667': task157_count_vowels_and_consonants
'668': task1580_eqasc-perturbed_question_generation
'669': task1581_eqasc-perturbed_answer_generation
'670': task1582_bless_hypernym_generation
'671': task1583_bless_meronym_classification
'672': task1584_evalution_meronym_classification
'673': task1585_root09_hypernym_generation
'674': task1586_scifact_title_generation
'675': task1587_scifact_classification
'676': task1588_tecla_classification
'677': task1589_scifact_classification
'678': task158_count_frequency_of_words
'679': task1590_diplomacy_text_generation
'680': task1591_allocine_classification
'681': task1592_yahoo_answers_topics_classfication
'682': task1593_yahoo_answers_topics_classification
'683': task1594_yahoo_answers_topics_question_generation
'684': task1595_event2mind_text_generation_1
'685': task1596_event2mind_text_generation_2
'686': task1597_nyc_slot_filling
'687': task1598_nyc_long_text_generation
'688': task1599_smcalflow_classification
'689': task159_check_frequency_of_words_in_sentence_pair
'690': task1600_smcalflow_sentence_generation
'691': task1601_webquestions_answer_generation
'692': task1602_webquestion_question_genreation
'693': task1603_smcalflow_sentence_generation
'694': task1604_ethos_text_classification
'695': task1605_ethos_text_classification
'696': task1606_ethos_text_classification
'697': task1607_ethos_text_classification
'698': task1608_xquad_en_answer_generation
'699': task1609_xquad_en_question_generation
'700': task160_replace_letter_in_a_sentence
'701': task1610_xquad_es_answer_generation
'702': task1611_xquad_es_question_generation
'703': task1612_sick_label_classification
'704': task1613_sick_given_category_generate_sentence
'705': task1614_sick_text_modify
'706': task1615_sick_tclassify_b_relation_a
'707': task1616_cc_alligned_translate_eng_tel
'708': task1617_cc_alligned_translate_tel_eng
'709': task1618_cc_alligned_classify_tel_eng
'710': task1619_menyo20k-mt_en_yo_translation
'711': task161_count_words_containing_letter
'712': task1620_menyo20k-mt_yo_en_translation
'713': task1621_menyo20k-mt_en_yo_language_identification
'714': task1622_disfl_qa_text_modication
'715': task1623_disfl_qa_disfluent_question_classification
'716': task1624_disfl_qa_question_yesno_classification
'717': task1625_disfl_qa_asnwer_generation
'718': task1626_copa_hr_question_answering
'719': task1627_copa_hr_classification
'720': task1628_copa_hr_question_answering
'721': task1629_copa_hr_classification
'722': task162_count_words_starting_with_letter
'723': task1630_openpi_classification
'724': task1631_openpi_answer_generation
'725': task1637_doqa2.1_cooking_text_summarization
'726': task1638_doqa2.1_movies_text_summarization
'727': task1639_doqa2.1_travel_text_summarization
'728': task163_count_words_ending_with_letter
'729': task1640_aqa1.0_answerable_unanswerable_question_classification
'730': task1645_medical_question_pair_dataset_text_classification
'731': task1646_dataset_card_for_catalonia_independence_corpus_text_classification
'732': task1647_opus_books_en-pt_translation
'733': task1648_opus_books_en-sv_translation
'734': task1649_opus_books_en-no_translation
'735': task164_mcscript_question_answering_text
'736': task1650_opus_books_en-fi_translation
'737': task1651_opus_books_en-es__translation
'738': task1652_opus_books_ca-en_translation
'739': task1654_mkb_translation
'740': task1655_mkb_translation
'741': task1656_gooaq_answer_generation
'742': task1657_gooaq_question_generation
'743': task1659_title_generation
'744': task165_mcscript_question_answering_commonsense
'745': task1660_super_glue_question_generation
'746': task1661_super_glue_classification
'747': task1662_cedr_ru_classification
'748': task1663_cedr_ru_incorrect_classification
'749': task1664_winobias_text_generation
'750': task1665_trainglecopa_question_generation
'751': task1666_cail2018_answer_generation
'752': task1667_cail2018_answer_generation
'753': task1669_md_gender_bias_text_modification
'754': task166_clariq_sentence_generation
'755': task1670_md_gender_bias_text_modification
'756': task1676_xquad-ca_translation
'757': task1677_xquad-ca_translation
'758': task1678_mathqa_answer_selection
'759': task167_strategyqa_question_generation
'760': task1685_menyo20k_translation
'761': task1686_menyo20k_translation
'762': task1689_qed_amara_translation
'763': task168_strategyqa_question_decomposition
'764': task1690_qed_amara_translation
'765': task1691_qed_amara_translation
'766': task1692_qed_amara_translation
'767': task169_strategyqa_sentence_generation
'768': task1703_ljspeech_textmodification
'769': task1704_ljspeech_textmodification
'770': task1705_ljspeech_classification
'771': task1706_ljspeech_classification
'772': task170_hotpotqa_answer_generation
'773': task1711_poki_text_generation
'774': task1712_poki_classification
'775': task1713_convai3_sentence_generation
'776': task1714_convai3_sentence_generation
'777': task171_spl_translation_en_es
'778': task1720_civil_comments_toxicity_classification
'779': task1721_civil_comments_obscenity_classification
'780': task1722_civil_comments_threat_classification
'781': task1723_civil_comments_sexuallyexplicit_classification
'782': task1724_civil_comments_insult_classification
'783': task1725_civil_comments_severtoxicity_classification
'784': task1726_mathqa_correct_answer_generation
'785': task1727_wiqa_what_is_the_effect
'786': task1728_web_nlg_data_to_text
'787': task1729_personachat_generate_next
'788': task172_spl_translation_en_fa
'789': task1730_personachat_choose_next
'790': task1731_quartz_question_answering
'791': task173_spl_translation_en_it
'792': task174_spl_translation_en_ja
'793': task175_spl_translation_en_pl
'794': task177_para-nmt_paraphrasing
'795': task178_quartz_question_answering
'796': task179_participant_extraction
'797': task180_intervention_extraction
'798': task181_outcome_extraction
'799': task182_duorc_question_generation
'800': task183_rhyme_generation
'801': task184_snli_entailment_to_neutral_text_modification
'802': task185_snli_contradiction_to_neutral_text_modification
'803': task186_snli_contradiction_to_entailment_text_modification
'804': task187_snli_entailment_to_contradiction_text_modification
'805': task188_snli_neutral_to_entailment_text_modification
'806': task189_snli_neutral_to_contradiction_text_modification
'807': task190_snli_classification
'808': task191_hotpotqa_question_generation
'809': task192_hotpotqa_sentence_generation
'810': task193_duorc_question_generation
'811': task194_duorc_answer_generation
'812': task195_sentiment140_classification
'813': task196_sentiment140_answer_generation
'814': task197_mnli_domain_answer_generation
'815': task198_mnli_domain_classification
'816': task199_mnli_classification
'817': task200_mnli_entailment_classification
'818': task201_mnli_neutral_classification
'819': task202_mnli_contradiction_classification
'820': task203_mnli_sentence_generation
'821': task204_mnli_same_genre_classification
'822': task205_remove_even_elements
'823': task206_collatz_conjecture
'824': task207_max_element_lists
'825': task208_combinations_of_list
'826': task209_stancedetection_classification
'827': task213_rocstories_correct_ending_classification
'828': task214_rocstories_incorrect_ending_classification
'829': task215_rocstories_incorrect_answer_generation
'830': task216_rocstories_correct_answer_generation
'831': task217_rocstories_ordering_answer_generation
'832': task218_rocstories_swap_order_answer_generation
'833': task219_rocstories_title_answer_generation
'834': task220_rocstories_title_classification
'835': task221_rocstories_two_choice_classification
'836': task222_rocstories_two_chioce_slotting_classification
'837': task223_quartz_explanation_generation
'838': task224_scruples_anecdotes_ethical_judgment
'839': task225_english_language_answer_generation
'840': task226_english_language_answer_relevance_classification
'841': task227_clariq_classification
'842': task228_arc_answer_generation_easy
'843': task229_arc_answer_generation_hard
'844': task231_iirc_link_classification
'845': task232_iirc_link_number_classification
'846': task233_iirc_link_exists_classification
'847': task234_iirc_passage_line_answer_generation
'848': task235_iirc_question_from_subtext_answer_generation
'849': task236_iirc_question_from_passage_answer_generation
'850': task237_iirc_answer_from_subtext_answer_generation
'851': task238_iirc_answer_from_passage_answer_generation
'852': task239_tweetqa_answer_generation
'853': task240_tweetqa_question_generation
'854': task241_tweetqa_classification
'855': task242_tweetqa_classification
'856': task243_count_elements_in_set_intersection
'857': task244_count_elements_in_set_union
'858': task245_check_presence_in_set_intersection
'859': task246_dream_question_generation
'860': task247_dream_answer_generation
'861': task248_dream_classification
'862': task249_enhanced_wsc_pronoun_disambiguation
'863': task250_spl_translation_en_ar
'864': task251_spl_translation_en_fi
'865': task252_spl_translation_en_tr
'866': task253_spl_translation_en_zh
'867': task254_spl_translation_fi_en
'868': task255_spl_translation_it_en
'869': task256_spl_translation_de_en
'870': task257_spl_translation_ar_en
'871': task258_spl_translation_fa_en
'872': task259_spl_translation_tr_en
'873': task260_spl_translation_zh_en
'874': task261_spl_translation_es_en
'875': task262_spl_translation_ja_en
'876': task263_spl_translation_pl_en
'877': task264_paper_reviews_accept_or_reject_classification
'878': task265_paper_reviews_language_identification
'879': task266_paper_reviews_reviewer_perspective_classification
'880': task267_concatenate_and_reverse_all_elements_from_index_i_to_j
'881': task268_casehold_legal_answer_generation
'882': task269_csrg_counterfactual_story_generation
'883': task270_csrg_counterfactual_context_generation
'884': task271_europarl_translation
'885': task272_europarl_translation
'886': task273_europarl_classification
'887': task274_overruling_legal_classification
'888': task275_enhanced_wsc_paraphrase_generation
'889': task276_enhanced_wsc_classification
'890': task277_stereoset_sentence_generation_stereotype
'891': task278_stereoset_sentence_generation_antistereotype
'892': task279_stereoset_classification_stereotype
'893': task280_stereoset_classification_stereotype_type
'894': task281_points_of_correspondence
'895': task282_scruples_event_time
'896': task283_dream_incorrect_answer_generation
'897': task284_imdb_classification
'898': task285_imdb_answer_generation
'899': task286_olid_offense_judgment
'900': task287_casehold_legal_incorrect_answer_generation
'901': task288_gigaword_summarization
'902': task289_gigaword_summarization
'903': task290_tellmewhy_question_answerability
'904': task291_semeval_2020_task4_commonsense_validation
'905': task292_storycommonsense_character_text_generation
'906': task293_storycommonsense_emotion_text_generation
'907': task294_storycommonsense_motiv_text_generation
'908': task295_semeval_2020_task4_commonsense_reasoning
'909': task296_storycloze_correct_end_classification
'910': task297_storycloze_incorrect_end_classification
'911': task298_storycloze_correct_end_classification
'912': task299_storycloze_sentence_generation
'913': task300_storycloze_order_generation
'914': task301_record_question_generation
'915': task302_record_classification
'916': task303_record_incorrect_answer_generation
'917': task304_numeric_fused_head_resolution
'918': task305_jeopardy_answer_generation_normal
'919': task306_jeopardy_answer_generation_double
'920': task307_jeopardy_answer_generation_final
'921': task308_jeopardy_answer_generation_all
'922': task309_race_answer_generation
'923': task310_race_classification
'924': task311_race_question_generation
'925': task312_europarl_sv_en_translation
'926': task313_europarl_en_sv_translation
'927': task314_europarl_sv-en_classification
'928': task315_europarl_sv-en_language_identification
'929': task316_crows-pairs_classification_stereotype
'930': task317_crows-pairs_classification_stereotype_type
'931': task318_stereoset_classification_gender
'932': task319_stereoset_classification_profession
'933': task320_stereoset_classification_race
'934': task321_stereoset_classification_religion
'935': task322_jigsaw_classification_threat
'936': task323_jigsaw_classification_sexually_explicit
'937': task324_jigsaw_classification_disagree
'938': task325_jigsaw_classification_identity_attack
'939': task326_jigsaw_classification_obscene
'940': task327_jigsaw_classification_toxic
'941': task328_jigsaw_classification_insult
'942': task329_gap_classification
'943': task330_gap_answer_generation
'944': task331_gap_incorrect_answer_generation
'945': task332_tellmewhy_answer_generation
'946': task333_hateeval_classification_hate_en
'947': task334_hateeval_classification_hate_es
'948': task335_hateeval_classification_aggresive_en
'949': task336_hateeval_classification_aggresive_es
'950': task337_hateeval_classification_individual_en
'951': task338_hateeval_classification_individual_es
'952': task339_record_answer_generation
'953': task340_winomt_classification_gender_pro
'954': task341_winomt_classification_gender_anti
'955': task342_winomt_classification_profession_pro
'956': task343_winomt_classification_profession_anti
'957': task344_hybridqa_answer_generation
'958': task345_hybridqa_answer_generation
'959': task346_hybridqa_classification
'960': task347_hybridqa_incorrect_answer_generation
'961': task348_squad2.0_unanswerable_question_generation
'962': task349_squad2.0_answerable_unanswerable_question_classification
'963': task350_winomt_classification_gender_identifiability_pro
'964': task351_winomt_classification_gender_identifiability_anti
'965': task352_coda-19_classification
'966': task353_casino_classification_negotiation_elicit_pref
'967': task354_casino_classification_negotiation_no_need
'968': task355_casino_classification_negotiation_other_need
'969': task356_casino_classification_negotiation_self_need
'970': task357_casino_classification_negotiation_small_talk
'971': task358_casino_classification_negotiation_uv_part
'972': task359_casino_classification_negotiation_vouch_fair
'973': task360_spolin_yesand_response_generation
'974': task361_spolin_yesand_prompt_response_classification
'975': task362_spolin_yesand_prompt_response_sub_classification
'976': task363_sst2_polarity_classification
'977': task364_regard_social_impact_classification
'978': task365_synthetic_remove_vowels
'979': task366_synthetic_return_primes
'980': task367_synthetic_remove_floats
'981': task368_synthetic_even_or_odd_calculation
'982': task369_synthetic_remove_odds
'983': task370_synthetic_remove_divisible_by_3
'984': task371_synthetic_product_of_list
'985': task372_synthetic_palindrome_numbers
'986': task373_synthetic_round_tens_place
'987': task374_synthetic_pos_or_neg_calculation
'988': task375_classify_type_of_sentence_in_debate
'989': task376_reverse_order_of_words
'990': task377_remove_words_of_given_length
'991': task378_reverse_words_of_given_length
'992': task379_agnews_topic_classification
'993': task380_boolq_yes_no_question
'994': task381_boolq_question_generation
'995': task382_hybridqa_answer_generation
'996': task383_matres_classification
'997': task384_socialiqa_question_classification
'998': task385_socialiqa_incorrect_answer_generation
'999': task386_semeval_2018_task3_irony_detection
'1000': task387_semeval_2018_task3_irony_classification
'1001': task388_torque_token_classification
'1002': task389_torque_generate_temporal_question
'1003': task390_torque_text_span_selection
'1004': task391_causal_relationship
'1005': task392_inverse_causal_relationship
'1006': task393_plausible_result_generation
'1007': task397_semeval_2018_task1_tweet_anger_detection
'1008': task398_semeval_2018_task1_tweet_joy_detection
'1009': task399_semeval_2018_task1_tweet_sadness_detection
'1010': task400_paws_paraphrase_classification
'1011': task401_numeric_fused_head_reference
'1012': task402_grailqa_paraphrase_generation
'1013': task403_creak_commonsense_inference
'1014': task404_grailqa_paraphrase_validation
'1015': task405_narrativeqa_question_generation
'1016': task406_mickey_fr_sentence_perturbation_generation
'1017': task407_mickey_hi_sentence_perturbation_generation
'1018': task408_mickey_it_sentence_perturbation_generation
'1019': task409_mickey_nl_sentence_perturbation_generation
'1020': task410_mickey_ru_sentence_perturbation_generation
'1021': task411_mickey_vi_sentence_perturbation_generation
'1022': task412_mickey_zh_sentence_perturbation_generation
'1023': task413_mickey_en_sentence_perturbation_generation
'1024': task414_mickey_ar_sentence_perturbation_generation
'1025': task415_mickey_bg_sentence_perturbation_generation
'1026': task416_mickey_de_sentence_perturbation_generation
'1027': task417_mickey_es_sentence_perturbation_generation
'1028': task418_persent_title_generation
'1029': task419_persent_answer_generation
'1030': task420_persent_document_sentiment_classification
'1031': task421_persent_sentence_sentiment_classification
'1032': task422_persent_sentence_sentiment_verification
'1033': task423_persent_document_sentiment_verification
'1034': task424_hindienglish_corpora_hi_en_translation
'1035': task425_hindienglish_corpora_en_hi_translation
'1036': task426_hindienglish_corpora_hi-en_classification
'1037': task427_hindienglish_corpora_hi-en_language_identification
'1038': task428_senteval_inversion
'1039': task429_senteval_tense
'1040': task430_senteval_subject_count
'1041': task431_senteval_object_count
'1042': task432_alt_en_hi_translation
'1043': task433_alt_hi_en_translation
'1044': task434_alt_en_hi_answer_generation
'1045': task435_alt_en_ja_translation
'1046': task436_alt_ja_en_translation
'1047': task437_alt_en_ja_answer_generation
'1048': task438_eng_guj_parallel_corpus_en_gu_translation
'1049': task439_eng_guj_parallel_corpus_gu_en_translation
'1050': task440_eng_guj_parallel_corpus_gu-en_classification
'1051': task441_eng_guj_parallel_corpus_gu-en_language_identification
'1052': task442_com_qa_paraphrase_question_generation
'1053': task443_com_qa_ans_question_generation
'1054': task444_com_qa_question_paraphrases_answer_generation
'1055': task446_opus_paracrawl_en_so_translation
'1056': task447_opus_paracrawl_classification
'1057': task448_opus_paracrawl_en_tl_translation
'1058': task449_opus_paracrawl_ig_en_translation
'1059': task450_opus_paracrawl_so_en_translation
'1060': task451_opus_paracrawl_tl_en_translation
'1061': task452_opus_paracrawl_en_ig_translation
'1062': task453_swag_answer_generation
'1063': task454_swag_incorrect_answer_generation
'1064': task455_swag_context_generation
'1065': task456_matres_intention_classification
'1066': task457_matres_conditional_classification
'1067': task458_matres_negation_classification
'1068': task459_matres_static_classification
'1069': task460_qasper_answer_generation
'1070': task461_qasper_question_generation
'1071': task462_qasper_classification
'1072': task463_parsinlu_entailment_classification
'1073': task464_parsinlu_entailment_sentence_generation
'1074': task465_parsinlu_qqp_classification
'1075': task466_parsinlu_qqp_text_modification
'1076': task467_parsinlu_rc_answer_generation
'1077': task468_parsinlu_rc_question_generation
'1078': task469_mrqa_answer_generation
'1079': task470_mrqa_question_generation
'1080': task471_haspart_answer_generation
'1081': task472_haspart_classification
'1082': task473_parsinlu_mc_classification
'1083': task474_parsinlu_mc_classification
'1084': task475_yelp_polarity_classification
'1085': task476_cls_english_books_classification
'1086': task477_cls_english_dvd_classification
'1087': task478_cls_english_music_classification
'1088': task479_cls_german_books_classification
'1089': task480_cls_german_dvd_classification
'1090': task481_cls_german_music_classification
'1091': task482_cls_french_books_classification
'1092': task483_cls_french_dvd_classification
'1093': task484_cls_french_music_classification
'1094': task485_cls_japanese_books_classification
'1095': task486_cls_japanese_dvd_classification
'1096': task487_cls_japanese_music_classification
'1097': task488_extract_all_alphabetical_elements_from_list_in_order
'1098': task489_mwsc_question_generation
'1099': task490_mwsc_options_generation
'1100': task491_mwsc_answer_generation
'1101': task492_mwsc_incorrect_answer_generation
'1102': task493_review_polarity_classification
'1103': task494_review_polarity_answer_generation
'1104': task495_semeval_headline_classification
'1105': task496_semeval_answer_generation
'1106': task497_extract_all_numbers_from_list_in_order
'1107': task498_scruples_anecdotes_whoiswrong_classification
'1108': task499_extract_and_add_all_numbers_from_list
'1109': task500_scruples_anecdotes_title_generation
'1110': task501_scruples_anecdotes_post_type_verification
'1111': task502_scruples_anecdotes_whoiswrong_verification
'1112': task503_scruples_anecdotes_isanswerable
'1113': task504_count_all_alphabetical_elements_in_list
'1114': task505_count_all_numerical_elements_in_list
'1115': task506_position_of_all_alphabetical_elements_in_list
'1116': task507_position_of_all_numerical_elements_in_list
'1117': task508_scruples_dilemmas_more_ethical_isidentifiable
'1118': task509_collate_of_all_alphabetical_and_numerical_elements_in_list_separately
'1119': task510_reddit_tifu_title_summarization
'1120': task511_reddit_tifu_long_text_summarization
'1121': task512_twitter_emotion_classification
'1122': task513_argument_stance_classification
'1123': task514_argument_consequence_classification
'1124': task515_senteval_odd_word_out
'1125': task516_senteval_conjoints_inversion
'1126': task517_emo_classify_emotion_of_dialogue
'1127': task518_emo_different_dialogue_emotions
'1128': task519_aquamuse_question_generation
'1129': task520_aquamuse_answer_given_in_passage
'1130': task521_trivia_question_classification
'1131': task523_find_if_numbers_or_alphabets_are_more_in_list
'1132': task524_parsinlu_food_aspect_classification
'1133': task525_parsinlu_movie_aspect_classification
'1134': task526_parsinlu_movie_overal_classification
'1135': task527_parsinlu_food_overal_classification
'1136': task528_parsinlu_movie_aspect_detection
'1137': task529_parsinlu_food_aspect_detection
'1138': task530_europarl_en_es_translation
'1139': task531_europarl_es_en_translation
'1140': task532_europarl_en-es_classification
'1141': task533_europarl_es-en_language_identification
'1142': task534_farstail_entailment
'1143': task535_alt_translation_ch_en
'1144': task536_alt_translation_vi_en
'1145': task537_alt_translation_th_en
'1146': task538_alt_translation_bu_en
'1147': task539_alt_translation_ma_en
'1148': task540_alt_translation_la_en
'1149': task541_alt_translation_kh_en
'1150': task542_alt_translation_ja_en
'1151': task543_alt_translation_bh_en
'1152': task544_alt_translation_hi_en
'1153': task545_alt_translation_fi_en
'1154': task546_alt_translation_bg_en
'1155': task547_alt_translation_entk_en
'1156': task548_alt_translation_en_ch
'1157': task549_alt_translation_en_vi
'1158': task550_discofuse_sentence_generation
'1159': task551_alt_translation_en_th
'1160': task552_alt_translation_en_bu
'1161': task553_alt_translation_en_ma
'1162': task554_alt_translation_en_la
'1163': task555_alt_translation_en_kh
'1164': task556_alt_translation_en_ja
'1165': task557_alt_translation_en_ba
'1166': task558_alt_translation_en_hi
'1167': task559_alt_translation_en_fi
'1168': task560_alt_translation_en_entk
'1169': task561_alt_translation_en_bg
'1170': task562_alt_language_identification
'1171': task563_discofuse_answer_generation
'1172': task564_discofuse_classification
'1173': task565_circa_answer_generation
'1174': task566_circa_classification
'1175': task567_circa_text_generation
'1176': task568_circa_question_generation
'1177': task569_recipe_nlg_text_generation
'1178': task570_recipe_nlg_ner_generation
'1179': task571_recipe_nlg_ner_generation
'1180': task572_recipe_nlg_text_generation
'1181': task573_air_dialogue_classification
'1182': task574_air_dialogue_sentence_generation
'1183': task575_air_dialogue_classification
'1184': task576_curiosity_dialogs_answer_generation
'1185': task577_curiosity_dialogs_classification
'1186': task578_curiosity_dialogs_answer_generation
'1187': task579_socialiqa_classification
'1188': task580_socialiqa_answer_generation
'1189': task581_socialiqa_question_generation
'1190': task582_naturalquestion_answer_generation
'1191': task585_preposition_classification
'1192': task586_amazonfood_polarity_classification
'1193': task587_amazonfood_polarity_correction_classification
'1194': task588_amazonfood_rating_classification
'1195': task589_amazonfood_summary_text_generation
'1196': task590_amazonfood_summary_correction_classification
'1197': task591_sciq_answer_generation
'1198': task592_sciq_incorrect_answer_generation
'1199': task593_sciq_explanation_generation
'1200': task594_sciq_question_generation
'1201': task595_mocha_answer_generation
'1202': task596_mocha_question_generation
'1203': task597_cuad_answer_generation
'1204': task598_cuad_answer_generation
'1205': task599_cuad_question_generation
'1206': task600_find_the_longest_common_substring_in_two_strings
'1207': task601_flores_translation_sntoen
'1208': task602_wikitext-103_answer_generation
'1209': task603_wikitext-103_fill_in_the_blank
'1210': task604_flores_translation_entosn
'1211': task605_find_the_longest_common_subsequence_in_two_lists
'1212': task606_sum_of_all_numbers_in_list_between_positions_i_and_j
'1213': task607_sbic_intentional_offense_binary_classification
'1214': task608_sbic_sexual_offense_binary_classification
'1215': task609_sbic_potentially_offense_binary_classification
'1216': task610_conllpp_ner
'1217': task611_mutual_multi_turn_dialogue
'1218': task612_yorubabbc_classification
'1219': task613_politifact_text_generation
'1220': task614_glucose_cause_event_detection
'1221': task615_moviesqa_answer_generation
'1222': task616_cola_classification
'1223': task617_amazonreview_category_text_generation
'1224': task618_amazonreview_summary_text_generation
'1225': task619_ohsumed_abstract_title_generation
'1226': task620_ohsumed_medical_subject_headings_answer_generation
'1227': task621_ohsumed_yes_no_numerical_answer_generation
'1228': task622_replace_alphabets_in_a_list_by_their_position_in_english_alphabet
'1229': task623_ohsumed_yes_no_answer_generation
'1230': task624_ohsumed_question_answering
'1231': task625_xlwic_true_or_false_answer_generation
'1232': task626_xlwic_sentence_based_on_given_word_sentence_generation
'1233': task627_xlwic_word_with_same_meaning_sentence_generation
'1234': task628_xlwic_word_with_different_meaning_sentence_generation
'1235': task629_dbpedia_14_classification
'1236': task630_dbpedia_14_classification
'1237': task631_dbpedia_14_incorrect_answer_generation
'1238': task632_dbpedia_14_classification
'1239': task633_dbpedia_14_answer_generation
'1240': task634_allegro_reviews_classification
'1241': task635_allegro_reviews_answer_generation
'1242': task636_extract_and_sort_unique_alphabets_in_a_list
'1243': task637_extract_and_sort_unique_digits_in_a_list
'1244': task638_multi_woz_classification
'1245': task639_multi_woz_user_utterance_generation
'1246': task640_esnli_classification
'1247': task641_esnli_classification
'1248': task642_esnli_classification
'1249': task643_refresd_classification
'1250': task644_refresd_translation
'1251': task645_summarization
'1252': task646_answer_generation
'1253': task647_answer_generation
'1254': task648_answer_generation
'1255': task649_race_blank_question_generation
'1256': task650_opus100_ar_en_translation
'1257': task651_opus100_en_ar_translation
'1258': task652_parsinlu_en_fa_translation
'1259': task653_parsinlu_fa_en_translation
'1260': task654_bible_fa_en_translation
'1261': task655_bible_en_fa_translation
'1262': task656_quran_en_fa_translation
'1263': task657_quran_fa_en_translation
'1264': task658_tep_en_fa_translation
'1265': task659_tep_fa_en_translation
'1266': task660_mizan_fa_en_translation
'1267': task661_mizan_en_fa_translation
'1268': task662_global_voices_fa_en_translation
'1269': task663_global_voices_en_fa_translation
'1270': task668_extreme_abstract_summarization
'1271': task669_ambigqa_answer_generation
'1272': task670_ambigqa_question_generation
'1273': task671_ambigqa_text_generation
'1274': task672_nummersense
'1275': task673_google_wellformed_query_classification
'1276': task674_google_wellformed_query_sentence_generation
'1277': task675_google_wellformed_query_sentence_generation
'1278': task676_ollie_relationship_answer_generation
'1279': task677_ollie_sentence_answer_generation
'1280': task678_ollie_actual_relationship_answer_generation
'1281': task679_hope_edi_english_text_classification
'1282': task680_hope_edi_tamil_text_classification
'1283': task681_hope_edi_malayalam_text_classification
'1284': task682_online_privacy_policy_text_classification
'1285': task683_online_privacy_policy_text_purpose_answer_generation
'1286': task684_online_privacy_policy_text_information_type_generation
'1287': task738_perspectrum_classification
'1288': task739_lhoestq_question_generation
'1289': task740_lhoestq_answer_generation_quantity
'1290': task741_lhoestq_answer_generation_place
'1291': task742_lhoestq_answer_generation_frequency
'1292': task743_eurlex_summarization
'1293': task744_eurlex_classification
'1294': task745_ai2_arithmetic_questions_arithmetic
'1295': task746_yelp_restaurant_review_classification
'1296': task747_glucose_cause_emotion_detection
'1297': task748_glucose_reverse_cause_event_detection
'1298': task749_glucose_reverse_cause_emotion_detection
'1299': task750_aqua_multiple_choice_answering
'1300': task751_svamp_subtraction_question_answering
'1301': task752_svamp_multiplication_question_answering
'1302': task753_svamp_addition_question_answering
'1303': task754_svamp_common-division_question_answering
'1304': task755_find_longest_substring_and_replace_its_sorted_lowercase_version_in_both_lists
'1305': task756_find_longert_substring_and_return_all_unique_alphabets_in_it
'1306': task757_msr_sqa_question_generation
'1307': task758_msr_sqa_question_answer_generation
'1308': task759_msr_sqa_incorrect_answer_generation
'1309': task761_app_review_classification
'1310': task762_emea_fr_sk_translation
'1311': task763_emea_es_lt_translation
'1312': task764_emea_bg_el_classification
'1313': task765_emea_bg_el_translation
'1314': task766_craigslist_bargains_classification
'1315': task767_craigslist_bargains_classification
'1316': task768_qed_text_span_selection
'1317': task769_qed_summarization
'1318': task770_pawsx_english_text_modification
'1319': task771_pawsx_korean_text_modification
'1320': task772_pawsx_french_text_modification
'1321': task773_pawsx_spanish_text_modification
'1322': task774_pawsx_german_text_modification
'1323': task775_pawsx_chinese_text_modification
'1324': task776_pawsx_japanese_text_modification
'1325': task777_pawsx_english_korean_translation
'1326': task778_pawsx_english_french_translation
'1327': task779_pawsx_english_spanish_translation
'1328': task780_pawsx_english_german_translation
'1329': task781_pawsx_english_chinese_translation
'1330': task782_pawsx_english_japanese_translation
'1331': task783_pawsx_korean_english_translation
'1332': task784_pawsx_korean_french_translation
'1333': task785_pawsx_korean_spanish_translation
'1334': task786_pawsx_korean_german_translation
'1335': task787_pawsx_korean_chinese_translation
'1336': task788_pawsx_korean_japanese_translation
'1337': task789_pawsx_french_english_translation
'1338': task790_pawsx_french_korean_translation
'1339': task791_pawsx_french_spanish_translation
'1340': task792_pawsx_french_german_translation
'1341': task793_pawsx_french_chinese_translation
'1342': task794_pawsx_french_japanese_translation
'1343': task795_pawsx_spanish_english_translation
'1344': task796_pawsx_spanish_korean_translation
'1345': task797_pawsx_spanish_french_translation
'1346': task798_pawsx_spanish_german_translation
'1347': task799_pawsx_spanish_chinese_translation
'1348': task800_pawsx_spanish_japanese_translation
'1349': task801_pawsx_german_english_translation
'1350': task802_pawsx_german_korean_translation
'1351': task803_pawsx_german_french_translation
'1352': task804_pawsx_german_spanish_translation
'1353': task805_pawsx_german_chinese_translation
'1354': task806_pawsx_german_japanese_translation
'1355': task807_pawsx_chinese_english_translation
'1356': task808_pawsx_chinese_korean_translation
'1357': task809_pawsx_chinese_french_translation
'1358': task810_pawsx_chinese_spanish_translation
'1359': task811_pawsx_chinese_german_translation
'1360': task812_pawsx_chinese_japanese_translation
'1361': task813_pawsx_japanese_english_translation
'1362': task814_pawsx_japanese_korean_translation
'1363': task815_pawsx_japanese_french_translation
'1364': task816_pawsx_japanese_spanish_translation
'1365': task817_pawsx_japanese_german_translation
'1366': task818_pawsx_japanese_chinese_translation
'1367': task819_pec_sentiment_classification
'1368': task820_protoqa_answer_generation
'1369': task821_protoqa_question_generation
'1370': task823_peixian-rtgender_sentiment_analysis
'1371': task827_copa_commonsense_reasoning
'1372': task828_copa_commonsense_cause_effect
'1373': task829_giga_fren_translation
'1374': task830_poleval2019_mt_translation
'1375': task831_giga_fren_classification
'1376': task832_poleval2019_mt_classification
'1377': task833_poem_sentiment_classification
'1378': task834_mathdataset_classification
'1379': task835_mathdataset_answer_generation
'1380': task836_viquiquad_question_generation
'1381': task837_viquiquad_answer_generation
'1382': task838_cdt_classification
'1383': task839_cdt_classification
'1384': task840_para_pdt_en_es_translation
'1385': task841_para_pdt_de_en_translation
'1386': task842_para_pdt_cs_en_translation
'1387': task843_financial_phrasebank_classification
'1388': task844_financial_phrasebank_classification
'1389': task845_pubmedqa_question_generation
'1390': task846_pubmedqa_classification
'1391': task847_pubmedqa_question_generation
'1392': task848_pubmedqa_classification
'1393': task849_pubmedqa_answer_generation
'1394': task850_synthetic_longest_palindrome
'1395': task851_synthetic_multiply_evens
'1396': task852_synthetic_multiply_odds
'1397': task853_hippocorpus_long_text_generation
'1398': task854_hippocorpus_classification
'1399': task855_conv_ai_2_classification
'1400': task856_conv_ai_2_classification
'1401': task857_inquisitive_question_generation
'1402': task858_inquisitive_span_detection
'1403': task859_prost_question_generation
'1404': task860_prost_mcq_generation
'1405': task861_asdiv_addsub_question_answering
'1406': task861_prost_mcq_answers_generation
'1407': task862_asdiv_multidiv_question_answering
'1408': task863_asdiv_multiop_question_answering
'1409': task864_asdiv_singleop_question_answering
'1410': task865_mawps_addsub_question_answering
'1411': task866_mawps_multidiv_question_answering
'1412': task867_mawps_multiop_question_answering
'1413': task868_cfq_mcd1_explanation_to_sql
'1414': task868_mawps_singleop_question_answering
'1415': task872_opus_xhosanavy_translation_eng_xhosa
'1416': task873_opus_xhosanavy_translation_xhosa_eng
'1417': task874_opus_xhosanavy_sr
'1418': task875_emotion_classification
'1419': task877_kde4_translation
'1420': task878_kde4_translation
'1421': task879_schema_guided_dstc8_classification
'1422': task880_schema_guided_dstc8_classification
'1423': task881_schema_guided_dstc8_classification
'1424': task886_quail_question_generation
'1425': task887_quail_answer_generation
'1426': task888_reviews_classification
'1427': task889_goemotions_classification
'1428': task890_gcwd_classification
'1429': task891_gap_coreference_resolution
'1430': task892_gap_reverse_coreference_resolution
'1431': task893_gap_fill_the_blank_coreference_resolution
'1432': task896_miam_language_classification
'1433': task897_freebase_qa_topic_question_generation
'1434': task898_freebase_qa_answer_generation
'1435': task899_freebase_qa_topic_generation
'1436': task900_freebase_qa_category_classification
'1437': task901_freebase_qa_category_question_generation
'1438': task902_deceptive_opinion_spam_classification
'1439': task903_deceptive_opinion_spam_classification
'1440': task904_hate_speech_offensive_classification
'1441': task905_hate_speech_offensive_classification
'1442': task906_dialogre_identify_names
'1443': task907_dialogre_identify_relationships
'1444': task908_dialogre_identify_familial_relationships
'1445': task909_dialogre_prevalent_speakers
'1446': task910_bianet_classification
'1447': task911_bianet_translation
'1448': task912_bianet_classification
'1449': task913_bianet_translation
'1450': task914_bianet_translation
'1451': task917_coqa_question_generation
'1452': task918_coqa_answer_generation
'1453': task919_coqa_incorrect_answer_generation
'1454': task921_code_x_glue_information_retreival
'1455': task922_event2mind_word_generation
'1456': task923_event2mind_classifier
'1457': task924_event2mind_word_generation
'1458': task925_coached_conv_pref_classifier
'1459': task926_coached_conv_pref_word_generation
'1460': task927_yelp_negative_to_positive_style_transfer
'1461': task928_yelp_positive_to_negative_style_transfer
'1462': task929_products_reviews_classification
'1463': task930_dailydialog_classification
'1464': task931_dailydialog_classification
'1465': task932_dailydialog_classification
'1466': task933_wiki_auto_style_transfer
'1467': task934_turk_simplification
'1468': task935_defeasible_nli_atomic_classification
'1469': task936_defeasible_nli_snli_classification
'1470': task937_defeasible_nli_social_classification
'1471': task938_copa_hi_commonsense_reasoning
'1472': task939_copa_hi_commonsense_cause_effect
'1473': task940_copa_gu_commonsense_reasoning
'1474': task941_copa_gu_commonsense_cause_effect
'1475': task942_copa_mr_commonsense_reasoning
'1476': task943_copa_mr_commonsense_cause_effect
'1477': task944_wiki_cloze_as_multiple_choice_question_answering
'1478': task945_wiki_cloze_bn_multiple_choice_question_answering
'1479': task946_wiki_cloze_gu_multiple_choice_question_answering
'1480': task947_wiki_cloze_hi_multiple_choice_question_answering
'1481': task948_wiki_cloze_kn_multiple_choice_question_answering
'1482': task949_wiki_cloze_ml_multiple_choice_question_answering
'1483': task950_wiki_cloze_mr_multiple_choice_question_answering
'1484': task951_wiki_cloze_or_multiple_choice_question_answering
'1485': task952_wiki_cloze_pa_multiple_choice_question_answering
'1486': task953_wiki_cloze_ta_multiple_choice_question_answering
'1487': task954_wiki_cloze_te_multiple_choice_question_answering
'1488': task955_wiki_auto_style_transfer
'1489': task956_leetcode_420_strong_password_check
'1490': task957_e2e_nlg_text_generation_generate
'1491': task958_e2e_nlg_text_generation_parse
'1492': task959_e2e_nlg_text_generation_identify
'1493': task960_ancora-ca-ner_named_entity_recognition
'1494': task961_ancora-ca-ner_text_auto_completion
'1495': task962_ancora-ca-ner_missing_word_prediction
'1496': task963_librispeech_asr_next_word_prediction
'1497': task964_librispeech_asr_text_auto_completion
'1498': task965_librispeech_asr_missing_word_prediction
'1499': task966_ruletaker_fact_checking_based_on_given_context
'1500': task967_ruletaker_incorrect_fact_generation_based_on_given_paragraph
'1501': task968_xcopa_commonsense_reasoning_et
'1502': task969_xcopa_commonsense_cause_effect_et
'1503': task970_sherliic_causal_relationship
'1504': task976_pib_indian_language_identification
'1505': task977_pib_translation_oriya_urdu
'1506': task978_pib_translation_urdu_oriya
'1507': task979_pib_translation_malayalam_oriya
'1508': task980_pib_translation_oriya_malayalam
'1509': task981_pib_translation_bengali_tamil
'1510': task982_pib_translation_tamil_bengali
'1511': task983_pib_translation_gujarati_marathi
'1512': task984_pib_translation_marathi_gujarati
'1513': task985_pib_translation_hindi_oriya
'1514': task986_pib_translation_oriya_hindi
'1515': task987_pib_translation_english_oriya
'1516': task988_pib_translation_oriya_english
'1517': task989_pib_translation_marathi_urdu
'1518': task990_pib_translation_urdu_marathi
'1519': task991_pib_translation_english_tamil
'1520': task992_pib_translation_tamil_english
'1521': task993_pib_translation_hindi_tamil
'1522': task994_pib_translation_tamil_hindi
'1523': task995_pib_translation_bengali_english
'1524': task996_pib_translation_english_bengali
'1525': task997_pib_translation_bengali_oriya
'1526': task998_pib_translation_oriya_bengali
'1527': task999_pib_translation_malayalam_tamil
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 6446252823.937876
num_examples: 8297033
- name: validation
num_bytes: 65114120.06212454
num_examples: 83809
download_size: 3784112593
dataset_size: 6511366944.0
---
# Dataset Card for "niv2-submix-mistral-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
salsarra/AQAD_SPLIT_W | 2023-10-09T04:42:36.000Z | [
"region:us"
] | salsarra | null | null | null | 0 | 7 | Entry not found |
Ayansk11/llama2_legal | 2023-10-08T17:52:41.000Z | [
"region:us"
] | Ayansk11 | null | null | null | 0 | 7 | Entry not found |
Hariharavarshan/Assessment | 2023-10-09T00:11:24.000Z | [
"region:us"
] | Hariharavarshan | null | null | null | 0 | 7 | Entry not found |
JzJd/post-test | 2023-10-09T01:38:38.000Z | [
"license:afl-3.0",
"region:us"
] | JzJd | null | null | null | 0 | 7 | ---
license: afl-3.0
---
|
benayas/snips_llm | 2023-10-09T01:40:59.000Z | [
"region:us"
] | benayas | null | null | null | 0 | 7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 2310806
num_examples: 13084
- name: test
num_bytes: 248670
num_examples: 1400
download_size: 546576
dataset_size: 2559476
---
# Dataset Card for "snips_llm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hanifabdlh/quac-merged | 2023-10-09T02:15:54.000Z | [
"region:us"
] | hanifabdlh | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: context
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: instruction_source
dtype: string
splits:
- name: train
num_bytes: 271212149
num_examples: 482055
download_size: 142626540
dataset_size: 271212149
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "quac-merged"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
carnival13/end_sur_DA_tokenized | 2023-10-09T03:55:21.000Z | [
"region:us"
] | carnival13 | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 127709805
num_examples: 160590
download_size: 27943074
dataset_size: 127709805
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "end_sur_DA_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
minh21/COVID-QA-Chunk-64-testset-biencoder-data-90_10 | 2023-10-09T04:29:10.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 13595044
num_examples: 203
download_size: 459357
dataset_size: 13595044
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "COVID-QA-Chunk-64-testset-biencoder-data-90_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
midojiang/frist-dataset | 2023-10-10T03:14:27.000Z | [
"region:us"
] | midojiang | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': ADONIS
'1': AFRICAN GIANT SWALLOWTAIL
'2': AMERICAN SNOOT
splits:
- name: train
num_bytes: 8825732.0
num_examples: 338
download_size: 8823395
dataset_size: 8825732.0
---
# Dataset Card for "input-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ngarneau/fm_queries | 2023-10-09T14:44:18.000Z | [
"region:us"
] | ngarneau | null | null | null | 0 | 7 | Entry not found |
mrabhi0505/instruction_output_dataset3 | 2023-10-09T08:34:54.000Z | [
"region:us"
] | mrabhi0505 | null | null | null | 0 | 7 | Entry not found |
Malmika/ict_text_dataset | 2023-10-09T17:19:25.000Z | [
"region:us"
] | Malmika | null | null | null | 0 | 7 | Entry not found |
dummybrendan/animals | 2023-10-09T17:25:56.000Z | [
"license:mit",
"region:us"
] | dummybrendan | null | null | null | 0 | 7 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 690375299.39
num_examples: 5399
download_size: 696333284
dataset_size: 690375299.39
---
|
ContextualAI/tiny-winogrande_xl | 2023-10-09T19:41:43.000Z | [
"region:us"
] | ContextualAI | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold_generation
dtype: string
splits:
- name: dev
num_bytes: 13725
num_examples: 100
download_size: 10505
dataset_size: 13725
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
---
# Dataset Card for "tiny-winogrande_xl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hmao/rule_learning_data_v1 | 2023-10-10T16:29:42.000Z | [
"region:us"
] | hmao | null | null | null | 0 | 7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: rule
dtype: string
- name: task_name
dtype: string
- name: configuration
dtype: string
- name: description
dtype: string
- name: filepath
dtype: string
- name: old_instruction
dtype: string
- name: prompt
dtype: string
- name: 'codellama/CodeLlama-34b-hf---{"do_sample": false, "max_new_tokens": 256,
"truncate": 15744, "return_full_text": false}'
dtype: string
splits:
- name: train
num_bytes: 7650436
num_examples: 2009
download_size: 2660984
dataset_size: 7650436
---
# Dataset Card for "rule_learning_data_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ascent_kb | 2022-11-03T16:30:39.000Z | [
"task_categories:other",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"knowledge-base",
"arxiv:2011.00905",
"region:us"
] | null | This dataset contains 8.9M commonsense assertions extracted by the Ascent pipeline (https://ascent.mpi-inf.mpg.de/). | @InProceedings{nguyen2021www,
title={Advanced Semantics for Commonsense Knowledge Extraction},
author={Nguyen, Tuan-Phong and Razniewski, Simon and Weikum, Gerhard},
year={2021},
booktitle={The Web Conference 2021},
} | null | 2 | 6 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: ascentkb
pretty_name: Ascent KB
tags:
- knowledge-base
dataset_info:
- config_name: canonical
features:
- name: arg1
dtype: string
- name: rel
dtype: string
- name: arg2
dtype: string
- name: support
dtype: int64
- name: facets
list:
- name: value
dtype: string
- name: type
dtype: string
- name: support
dtype: int64
- name: source_sentences
list:
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 2976697816
num_examples: 8904060
download_size: 710727536
dataset_size: 2976697816
- config_name: open
features:
- name: subject
dtype: string
- name: predicate
dtype: string
- name: object
dtype: string
- name: support
dtype: int64
- name: facets
list:
- name: value
dtype: string
- name: type
dtype: string
- name: support
dtype: int64
- name: source_sentences
list:
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 2882678298
num_examples: 8904060
download_size: 710727536
dataset_size: 2882678298
---
# Dataset Card for Ascent KB
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ascent.mpi-inf.mpg.de/
- **Repository:** https://github.com/phongnt570/ascent
- **Paper:** https://arxiv.org/abs/2011.00905
- **Point of Contact:** http://tuan-phong.com
### Dataset Summary
This dataset contains 8.9M commonsense assertions extracted by the Ascent pipeline developed at the [Max Planck Institute for Informatics](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/).
The focus of this dataset is on everyday concepts such as *elephant*, *car*, *laptop*, etc.
The current version of Ascent KB (v1.0.0) is approximately **19 times larger than ConceptNet** (note that, in this comparison, non-commonsense knowledge in ConceptNet such as lexical relations is excluded).
For more details, take a look at
[the research paper](https://arxiv.org/abs/2011.00905) and
[the website](https://ascent.mpi-inf.mpg.de).
### Supported Tasks and Leaderboards
The dataset can be used in a wide range of downstream tasks such as commonsense question answering or dialogue systems.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
There are two configurations available for this dataset:
1. `canonical` (default): This part contains `<arg1 ; rel ; arg2>`
assertions where the relations (`rel`) were mapped to
[ConceptNet relations](https://github.com/commonsense/conceptnet5/wiki/Relations)
with slight modifications:
- Introducing 2 new relations: `/r/HasSubgroup`, `/r/HasAspect`.
- All `/r/HasA` relations were replaced with `/r/HasAspect`.
This is motivated by the [ATOMIC-2020](https://allenai.org/data/atomic-2020)
schema, although they grouped all `/r/HasA` and
`/r/HasProperty` into `/r/HasProperty`.
 - The `/r/UsedFor` relation was replaced with `/r/ObjectUse`
 which is broader (it can mean _"used for"_, _"used in"_, _"used as"_, etc.).
 This is also taken from ATOMIC-2020.
2. `open`: This part contains open assertions of the form
`<subject ; predicate ; object>` extracted directly from web
contents. This is the original form of the `canonical` triples.
In both configurations, each assertion is equipped with
extra information including: a set of semantic `facets`
(e.g., *LOCATION*, *TEMPORAL*, etc.), its `support` (i.e., number of occurrences),
and a list of `source_sentences`.
An example row in the `canonical` configuration:
```JSON
{
"arg1": "elephant",
"rel": "/r/HasProperty",
"arg2": "intelligent",
"support": 15,
"facets": [
{
"value": "extremely",
"type": "DEGREE",
"support": 11
}
],
"source_sentences": [
{
"text": "Elephants are extremely intelligent animals.",
"source": "https://www.softschools.com/facts/animals/asian_elephant_facts/2310/"
},
{
"text": "Elephants are extremely intelligent creatures and an elephant's brain can weigh as much as 4-6 kg.",
"source": "https://www.elephantsforafrica.org/elephant-facts/"
}
]
}
```
### Data Fields
- **For `canonical` configuration**
- `arg1`: the first argument to the relationship, e.g., *elephant*
- `rel`: the canonical relation, e.g., */r/HasProperty*
 - `arg2`: the second argument to the relationship, e.g., *intelligent*
- `support`: the number of occurrences of the assertion, e.g., *15*
- `facets`: an array of semantic facets, each contains
- `value`: facet value, e.g., *extremely*
- `type`: facet type, e.g., *DEGREE*
- `support`: the number of occurrences of the facet, e.g., *11*
- `source_sentences`: an array of source sentences from which the assertion was
extracted, each contains
- `text`: the raw text of the sentence
- `source`: the URL to its parent document
- **For `open` configuration**
- The fields of this configuration are the same as the `canonical`
configuration's, except that
the (`arg1`, `rel`, `arg2`) fields are replaced with the
(`subject`, `predicate`, `object`) fields
which are free
text phrases extracted directly from the source sentences
using an Open Information Extraction (OpenIE) tool.
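The record layout above can be sketched in a few lines of Python. This is a minimal illustration, not part of the official Ascent pipeline; the `facet_values` helper is hypothetical, and the record is the `canonical` example shown earlier.

```python
# One `canonical` assertion in the schema documented above.
assertion = {
    "arg1": "elephant",
    "rel": "/r/HasProperty",
    "arg2": "intelligent",
    "support": 15,
    "facets": [{"value": "extremely", "type": "DEGREE", "support": 11}],
    "source_sentences": [
        {
            "text": "Elephants are extremely intelligent animals.",
            "source": "https://www.elephantsforafrica.org/elephant-facts/",
        }
    ],
}

def facet_values(record, facet_type):
    """Collect the values of all facets of a given type from one assertion."""
    return [f["value"] for f in record["facets"] if f["type"] == facet_type]

print(facet_values(assertion, "DEGREE"))  # ['extremely']
```

Records in this shape can then be filtered or aggregated as needed, e.g. keeping only assertions whose `support` exceeds a threshold.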
### Data Splits
There are no predefined splits. All data points belong to a single default split called `train`.
## Dataset Creation
### Curation Rationale
The commonsense knowledge base was created to assist in the development of robust and reliable AI.
### Source Data
#### Initial Data Collection and Normalization
Texts were collected from the web using the Bing Search API and went through various cleaning steps before being processed by an OpenIE tool to obtain open assertions.
The assertions were then grouped into semantically equivalent clusters.
Take a look at the research paper for more details.
#### Who are the source language producers?
Web users.
### Annotations
#### Annotation process
None.
#### Who are the annotators?
None.
### Personal and Sensitive Information
Unknown.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The knowledge base has been developed by researchers at the
[Max Planck Institute for Informatics](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/).
Contact [Tuan-Phong Nguyen](http://tuan-phong.com) in case of questions and comments.
### Licensing Information
[The Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@InProceedings{nguyen2021www,
title={Advanced Semantics for Commonsense Knowledge Extraction},
author={Nguyen, Tuan-Phong and Razniewski, Simon and Weikum, Gerhard},
year={2021},
booktitle={The Web Conference 2021},
}
```
### Contributions
Thanks to [@phongnt570](https://github.com/phongnt570) for adding this dataset. |
covid_tweets_japanese | 2023-01-25T14:28:47.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ja",
"license:cc-by-nd-4.0",
"region:us"
] | null | 53,640 Japanese tweets with annotation if a tweet is related to COVID-19 or not. The annotation is by majority decision by 5 - 10 crowd workers. Target tweets include "COVID" or "コロナ". The period of the tweets is from around January 2020 to around June 2020. The original tweets are not contained. Please use Twitter API to get them, for example. | No paper about this dataset is published yet. Please cite this dataset as "鈴木 優: COVID-19 日本語 Twitter データセット (http://www.db.info.gifu-u.ac.jp/covid-19-twitter-dataset/)" | null | 1 | 6 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ja
license:
- cc-by-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
pretty_name: COVID-19 日本語Twitterデータセット (COVID-19 Japanese Twitter Dataset)
dataset_info:
features:
- name: tweet_id
dtype: string
- name: assessment_option_id
dtype:
class_label:
names:
'0': '63'
'1': '64'
'2': '65'
'3': '66'
'4': '67'
'5': '68'
splits:
- name: train
num_bytes: 1662833
num_examples: 53639
download_size: 406005
dataset_size: 1662833
---
# Dataset Card for COVID-19 日本語Twitterデータセット (COVID-19 Japanese Twitter Dataset)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [COVID-19 日本語Twitterデータセット homepage](http://www.db.info.gifu-u.ac.jp/data/Data_5f02db873363f976fce930d1)
- **Repository:** [N/A]
- **Paper:** [N/A]
- **Leaderboard:** [N/A]
- **Point of Contact:** Check the homepage.
### Dataset Summary
53,640 Japanese tweets with annotation if a tweet is related to COVID-19 or not. The annotation is by majority decision by 5 - 10 crowd workers. Target tweets include "COVID" or "コロナ". The period of the tweets is from around January 2020 to around June 2020. The original tweets are not contained. Please use Twitter API to get them, for example.
### Supported Tasks and Leaderboards
Text classification: whether the tweet is related to COVID-19, and whether it expresses a fact or an opinion.
### Languages
The text that can be retrieved using the IDs in this dataset is Japanese, posted on Twitter.
## Dataset Structure
### Data Instances
A CSV file in which the first column is the Twitter ID and the second column is the assessment option ID.
### Data Fields
- `tweet_id`: Twitter ID.
- `assessment_option_id`: The selection result. It has the following meanings:
- 63: a general fact: generally published information, such as news.
- 64: a personal fact: personal news. For example, a person heard that the next-door neighbor, XX, has been infected with COVID-19, and this has not appeared in the news.
- 65: an opinion/feeling
- 66: difficult to determine whether it is related to COVID-19 (the tweet is definitely not "67: unrelated", but 63, 64, and 65 cannot be distinguished)
- 67: unrelated
- 68: it is a fact, but difficult to determine whether it is a general fact, a personal fact, or an impression (it may also be irrelevant to COVID-19, since it is indistinguishable between 63 - 65 and 67).
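When working with the raw (tweet ID, assessment option ID) pairs, the integer codes can be mapped to short human-readable labels. The mapping below is a convenience written from the descriptions in this card, not part of the released files:

```python
# Short labels for the assessment option IDs described above
ASSESSMENT_LABELS = {
    63: "general fact",
    64: "personal fact",
    65: "opinion/feeling",
    66: "hard to determine (but related to COVID-19)",
    67: "unrelated",
    68: "fact, but type unclear",
}

def describe(row):
    """row is a (tweet_id, assessment_option_id) pair from the CSV."""
    tweet_id, option_id = row
    return tweet_id, ASSESSMENT_LABELS[int(option_id)]

print(describe(("1234567890", "65")))
# ('1234567890', 'opinion/feeling')
```

Note that in the Hugging Face `dataset_info` above, `assessment_option_id` is exposed as a `class_label` whose names are the string codes '63' through '68'.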
### Data Splits
No paper has been published for this dataset yet, and the author appears to intend to publish one (it is not certain that split information will be included). Therefore, information on data splits is not provided at this time.
## Dataset Creation
### Curation Rationale
[More Information Needed] because the paper is not yet published.
### Source Data
#### Initial Data Collection and Normalization
53,640 Japanese tweets with annotation if a tweet is related to COVID-19 or not. Target tweets include "COVID" or "コロナ". The period of the tweets is from around January 2020 to around June 2020.
#### Who are the source language producers?
The language producers are users of Twitter.
### Annotations
#### Annotation process
The annotation is by majority decision by 5 - 10 crowd workers.
#### Who are the annotators?
Crowd workers.
### Personal and Sensitive Information
The dataset does not contain the original tweets.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset is hosted by Suzuki Laboratory, Gifu University, Japan.
### Licensing Information
CC-BY-ND 4.0
### Citation Information
A related paper has not yet been published.
The author asks that the dataset be cited as 「鈴木 優: COVID-19 日本語 Twitter データセット ( http://www.db.info.gifu-u.ac.jp/data/Data_5f02db873363f976fce930d1 ) 」.
### Contributions
Thanks to [@forest1988](https://github.com/forest1988) for adding this dataset. |
diplomacy_detection | 2023-01-25T14:29:25.000Z | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | null | @inproceedings{peskov-etal-2020-takes,
title = "It Takes Two to Lie: One to Lie, and One to Listen",
author = "Peskov, Denis and
Cheng, Benny and
Elgohary, Ahmed and
Barrow, Joe and
Danescu-Niculescu-Mizil, Cristian and
Boyd-Graber, Jordan",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.353",
doi = "10.18653/v1/2020.acl-main.353",
pages = "3811--3854",
abstract = "Trust is implicit in many online text conversations{---}striking up new friendships, or asking for tech support. But trust can be betrayed through deception. We study the language and dynamics of deception in the negotiation-based game Diplomacy, where seven players compete for world domination by forging and breaking alliances with each other. Our study with players from the Diplomacy community gathers 17,289 messages annotated by the sender for their intended truthfulness and by the receiver for their perceived truthfulness. Unlike existing datasets, this captures deception in long-lasting relationships, where the interlocutors strategically combine truth with lies to advance objectives. A model that uses power dynamics and conversational contexts can predict when a lie occurs nearly as well as human players.",
} | null | 0 | 6 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
pretty_name: HateOffensive
dataset_info:
features:
- name: messages
sequence: string
- name: sender_labels
sequence:
class_label:
names:
'0': 'false'
'1': 'true'
- name: receiver_labels
sequence:
class_label:
names:
'0': 'false'
'1': 'true'
'2': noannotation
- name: speakers
sequence:
class_label:
names:
'0': italy
'1': turkey
'2': russia
'3': england
'4': austria
'5': germany
'6': france
- name: receivers
sequence:
class_label:
names:
'0': italy
'1': turkey
'2': russia
'3': england
'4': austria
'5': germany
'6': france
- name: absolute_message_index
sequence: int64
- name: relative_message_index
sequence: int64
- name: seasons
sequence:
class_label:
names:
'0': spring
'1': fall
'2': winter
'3': Spring
'4': Fall
'5': Winter
- name: years
sequence:
class_label:
names:
'0': '1901'
'1': '1902'
'2': '1903'
'3': '1904'
'4': '1905'
'5': '1906'
'6': '1907'
'7': '1908'
'8': '1909'
'9': '1910'
'10': '1911'
'11': '1912'
'12': '1913'
'13': '1914'
'14': '1915'
'15': '1916'
'16': '1917'
'17': '1918'
- name: game_score
sequence:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
'10': '10'
'11': '11'
'12': '12'
'13': '13'
'14': '14'
'15': '15'
'16': '16'
'17': '17'
'18': '18'
- name: game_score_delta
sequence:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
'10': '10'
'11': '11'
'12': '12'
'13': '13'
'14': '14'
'15': '15'
'16': '16'
'17': '17'
'18': '18'
'19': '-1'
'20': '-2'
'21': '-3'
'22': '-4'
'23': '-5'
'24': '-6'
'25': '-7'
'26': '-8'
'27': '-9'
'28': '-10'
'29': '-11'
'30': '-12'
'31': '-13'
'32': '-14'
'33': '-15'
'34': '-16'
'35': '-17'
'36': '-18'
- name: players
sequence:
class_label:
names:
'0': italy
'1': turkey
'2': russia
'3': england
'4': austria
'5': germany
'6': france
- name: game_id
dtype: int64
splits:
- name: validation
num_bytes: 254344
num_examples: 21
- name: train
num_bytes: 2539778
num_examples: 189
- name: test
num_bytes: 506191
num_examples: 42
download_size: 3208706
dataset_size: 3300313
---
# Dataset Card for HateOffensive
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage** : https://sites.google.com/view/qanta/projects/diplomacy
- **Repository** : https://github.com/DenisPeskov/2020_acl_diplomacy
- **Paper** : http://users.umiacs.umd.edu/~jbg/docs/2020_acl_diplomacy.pdf
- **Leaderboard** :
- **Point of Contact** :
### Dataset Summary
This dataset contains pairwise conversations annotated by the sender and the receiver for deception (and conversely truthfulness). The 17,289 messages are gathered from 12 games.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
```
{
"messages":
["Greetings Sultan!\n\nAs your neighbor I would like to propose an alliance! What are your views on the board so far?", "I think an alliance would be great! Perhaps a dmz in the Black Sea would be a good idea to solidify this alliance?\n\nAs for my views on the board, my first moves will be Western into the Balkans and Mediterranean Sea.", "Sounds good lets call a dmz in the black sea", "What's our move this year?", "I've been away from the game for a while", "Not sure yet, what are your thoughts?", "Well I'm pretty worried about Germany attacking me (and Austria to a lesser extent) so im headed west. It looks like Italy's landing a army in Syr this fall unless you can stop it", "That sounds good to me. I'll move to defend against Italy while you move west. If it's not too much too ask, I'd like to request that you withdraw your fleet from bla.", "Oh sorry missed the msg to move out of bl sea ill do that this turn. I did bring my army down into Armenia, To help you expel the Italian. It looks like Austria and Italy are working together. If we have a chance in the region you should probably use smy to protect con. We can't afford to lose con.", "I'll defend con from both ank and smy.", "Hey sorry for stabbing you earlier, it was an especially hard choice since Turkey is usually my country of choice. It's cool we got to do this study huh?"],
"sender_labels": [false, true, false, true, true, true, true, true, true, true, true],
"receiver_labels": [true, true, true, true, true, true, true, true, true, true, "NOANNOTATION"],
"speakers": ["russia", "turkey", "russia", "russia", "russia", "turkey", "russia", "turkey", "russia", "turkey", "russia"],
"receivers": ["turkey", "russia", "turkey", "turkey", "turkey", "russia", "turkey", "russia", "turkey", "russia", "turkey"],
"absolute_message_index": [78, 107, 145, 370, 371, 374, 415, 420, 495, 497, 717],
"relative_message_index": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
"seasons": ["Spring", "Spring", "Spring", "Spring", "Spring", "Spring", "Fall", "Fall", "Spring", "Spring", "Fall"],
"years": ["1901", "1901", "1901", "1902", "1902", "1902", "1902", "1902", "1903", "1903", "1905"],
"game_score": ["4", "3", "4", "5", "5", "4", "5", "4", "5", "3", "7"],
"game_score_delta": ["1", "-1", "1", "1", "1", "-1", "1", "-1", "2", "-2", "7"],
"players": ["russia", "turkey"],
"game_id": 10
}
```
### Data Fields
- speakers: the sender of the message (string format. Seven possible values: russia, turkey, england, austria, germany, france, italy)
- receivers: the receiver of the message (string format. Seven possible values: russia, turkey, england, austria, germany, france, italy)
- messages: the raw message string (string format. ranges in length from one word to paragraphs in length)
- sender_labels: indicates if the sender of the message selected that the message is truthful, true, or deceptive, false. This is used for our ACTUAL_LIE calculation (true/false which can be bool or string format)
- receiver_labels: indicates if the receiver of the message selected that the message is perceived as truthful, true, or deceptive, false. In <10% of the cases, no annotation was received. This is used for our SUSPECTED_LIE calculation (string format. true/false/"NOANNOTATION" )
- game_score: the sender's current game score, i.e. number of supply centers (string format; values range from 0 to 18)
- game_score_delta: the sender's current game score minus the recipient's game score (string format; values range from -18 to 18)
- absolute_message_index: the index the message is in the entire game, across all dialogs (int format)
- relative_message_index: the index of the message in the current dialog (int format)
- seasons: the season in Diplomacy, associated with the year (string format. Spring, Fall, Winter)
- years: the year in Diplomacy, associated with the season (string format. 1901 through 1918)
- game_id: which of the 12 games the dialog comes from (int format ranging from 1 to 12)
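Because each instance is a whole dialog with parallel per-message arrays, a common first step is to flatten it into per-message examples, dropping messages without a receiver annotation. A minimal sketch following the raw JSON shape shown above (true/false booleans plus the "NOANNOTATION" string); the tiny two-message dialog is hypothetical:

```python
def flatten_dialog(example):
    """Turn one dialog instance into per-message records for the
    SUSPECTED_LIE (receiver-side) task, skipping unannotated messages."""
    records = []
    for i, message in enumerate(example["messages"]):
        receiver_label = example["receiver_labels"][i]
        if receiver_label == "NOANNOTATION":
            continue  # <10% of messages lack a receiver annotation
        records.append({
            "text": message,
            "speaker": example["speakers"][i],
            "receiver": example["receivers"][i],
            "sender_label": example["sender_labels"][i],
            "receiver_label": receiver_label,
        })
    return records

# Hypothetical two-message dialog in the card's format
dialog = {
    "messages": ["Shall we ally?", "Of course!"],
    "sender_labels": [True, False],
    "receiver_labels": [True, "NOANNOTATION"],
    "speakers": ["france", "germany"],
    "receivers": ["germany", "france"],
}
rows = flatten_dialog(dialog)
print(len(rows))  # 1 (the unannotated reply is dropped)
```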
### Data Splits
The dataset is split into train, validation, and test sets.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown
### Citation Information
```
@inproceedings{Peskov:Cheng:Elgohary:Barrow:Danescu-Niculescu-Mizil:Boyd-Graber-2020,
Title = {It Takes Two to Lie: One to Lie and One to Listen},
Author = {Denis Peskov and Benny Cheng and Ahmed Elgohary and Joe Barrow and Cristian Danescu-Niculescu-Mizil and Jordan Boyd-Graber},
Booktitle = {Association for Computational Linguistics},
Year = {2020},
Location = {Seattle},
}
```
### Contributions
Thanks to [@MisbahKhan789](https://github.com/MisbahKhan789) for adding this dataset. |
disfl_qa | 2022-11-18T19:58:47.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2106.... | null | Disfl-QA is a targeted dataset for contextual disfluencies in an information seeking setting,
namely question answering over Wikipedia passages. Disfl-QA builds upon the SQuAD-v2 (Rajpurkar et al., 2018)
dataset, where each question in the dev set is annotated to add a contextual disfluency using the paragraph as
a source of distractors.
The final dataset consists of ~12k (disfluent question, answer) pairs. Over 90% of the disfluencies are
corrections or restarts, making it a much harder test set for disfluency correction. Disfl-QA aims to fill a
major gap between speech and NLP research community. We hope the dataset can serve as a benchmark dataset for
testing robustness of models against disfluent inputs.
Our experiments reveal that the state-of-the-art models are brittle when subjected to disfluent inputs from
Disfl-QA. Detailed experiments and analyses can be found in our paper. | @inproceedings{gupta-etal-2021-disflqa,
title = "{Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering}",
author = "Gupta, Aditya and Xu, Jiacheng and Upadhyay, Shyam and Yang, Diyi and Faruqui, Manaal",
booktitle = "Findings of ACL",
year = "2021"
} | null | 0 | 6 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'DISFL-QA: A Benchmark Dataset for Understanding Disfluencies in Question
Answering'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
dataset_info:
features:
- name: squad_v2_id
dtype: string
- name: original question
dtype: string
- name: disfluent question
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 7712523
num_examples: 7182
- name: test
num_bytes: 3865097
num_examples: 3643
- name: validation
num_bytes: 1072731
num_examples: 1000
download_size: 48935038
dataset_size: 12650351
---
# Dataset Card for DISFL-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Disfl-QA](https://github.com/google-research-datasets/disfl-qa)
- **Paper:** [Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering](https://arxiv.org/pdf/2106.04016.pdf)
- **Point of Contact:** [disfl-qa team](disfl-qa@google.com)
### Dataset Summary
Disfl-QA is a targeted dataset for contextual disfluencies in an information seeking setting, namely question answering over Wikipedia passages. Disfl-QA builds upon the SQuAD-v2 ([Rajpurkar et al., 2018](https://www.aclweb.org/anthology/P18-2124/)) dataset, where each question in the dev set is annotated to add a contextual disfluency using the paragraph as a source of distractors.
The final dataset consists of ~12k (disfluent question, answer) pairs. Over 90% of the disfluencies are corrections or restarts, making it a much harder test set for disfluency correction. Disfl-QA aims to fill a major gap between the speech and NLP research communities. The authors hope the dataset can serve as a benchmark dataset for testing robustness of models against disfluent inputs.
The experiments reveal that the state-of-the-art models are brittle when subjected to disfluent inputs from Disfl-QA. Detailed experiments and analyses can be found in the [paper](https://arxiv.org/pdf/2106.04016.pdf).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in English only.
## Dataset Structure
### Data Instances
This example was too long and was cropped:
```
{
"answers": {
"answer_start": [94, 87, 94, 94],
"text": ["10th and 11th centuries", "in the 10th and 11th centuries", "10th and 11th centuries", "10th and 11th centuries"]
},
"context": "\"The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave thei...",
"id": "56ddde6b9a695914005b9629",
"original question": "When were the Normans in Normandy?",
  "disfluent question": "From which countries no tell me when were the Normans in Normandy?",
"title": "Normans"
}
```
### Data Fields
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `original question`: Original question from SQuAD-v2 (a `string` feature)
- `disfluent question`: Disfluent question from Disfl-QA (a `string` feature)
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
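As in SQuAD-v2, `answer_start` is a character offset into `context`, so slicing the context should reproduce the answer text. A quick sanity-check sketch reusing the (truncated) instance above; note that the offset assumes the context string starts directly with "The Normans", without a leading quote character:

```python
# Truncated context from the instance above (cropped in the card as well)
context = (
    "The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) "
    "were the people who in the 10th and 11th centuries gave thei..."
)
answer_start = 94
answer_text = "10th and 11th centuries"

# The character offset indexes into `context`, SQuAD-style
span = context[answer_start : answer_start + len(answer_text)]
assert span == answer_text
print(span)  # 10th and 11th centuries
```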
### Data Splits
Disfl-QA consists of ~12k disfluent questions with the following train/dev/test splits:
| File | Questions |
|-----|-----|
|train.json | 7182 |
|dev.json | 1000 |
|test.json | 3643 |
## Dataset Creation
### Curation Rationale
The research in NLP and speech community has been impeded by the lack of curated datasets containing such disfluencies. The datasets available today are mostly conversational in nature, and span a limited number of very specific domains (e.g., telephone conversations, court proceedings). Furthermore, only a small fraction of the utterances in these datasets contain disfluencies, with a limited and skewed distribution of disfluencies types. In the most popular dataset in the literature, the SWITCHBOARD corpus (Godfrey et al., 1992), only 5.9% of the words are disfluencies (Charniak and Johnson, 2001), of which > 50% are repetitions (Shriberg, 1996), which has been shown to be the relatively simpler form of disfluencies (Zayats et al., 2014; Jamshid Lou et al., 2018; Zayats et al., 2019). To fill this gap, the authors presented DISFL-QA, the first dataset containing contextual disfluencies in an information seeking setting, namely question answering over Wikipedia passages.
### Source Data
#### Initial Data Collection and Normalization
DISFL-QA is constructed by asking human raters to insert disfluencies in questions from SQUAD-v2, a popular question answering dataset, using the passage and remaining questions as context. These contextual disfluencies lend naturalness to DISFL-QA, and challenge models relying on shallow matching between question and context to predict an answer.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
Each question associated with the paragraph is sent for a human annotation task to add a contextual disfluency using the paragraph as a source of distractors. Finally, to ensure the quality of the dataset, a subsequent round of human evaluation with an option to re-annotate is conducted.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Disfl-QA dataset is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@inproceedings{gupta-etal-2021-disflqa,
title = "{Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering}",
author = "Gupta, Aditya and Xu, Jiacheng and Upadhyay, Shyam and Yang, Diyi and Faruqui, Manaal",
booktitle = "Findings of ACL",
year = "2021"
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset. |
hate_speech_filipino | 2023-01-25T14:31:38.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-twitter-data-philippine-election",
"language:tl",
"license:un... | null | Contains 10k tweets (training set) that are labeled as hate speech or non-hate speech. Released with 4,232 validation and 4,232 testing samples. Collected during the 2016 Philippine Presidential Elections. | @article{Cabasag-2019-hate-speech,
title={Hate speech in Philippine election-related tweets: Automatic detection and classification using natural language processing.},
author={Neil Vicente Cabasag, Vicente Raphael Chan, Sean Christian Lim, Mark Edward Gonzales, and Charibeth Cheng},
journal={Philippine Computing Journal},
volume={XIV},
number={1},
month={August},
year={2019}
} | null | 4 | 6 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- tl
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-twitter-data-philippine-election
task_categories:
- text-classification
task_ids:
- sentiment-analysis
pretty_name: Hate Speech in Filipino
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 995919
num_examples: 10000
- name: test
num_bytes: 995919
num_examples: 10000
- name: validation
num_bytes: 424365
num_examples: 4232
download_size: 822927
dataset_size: 2416203
---
# Dataset Card for Hate Speech in Filipino
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Hate Speech Dataset in Filipino homepage](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Repository:** [Hate Speech Dataset in Filipino homepage](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Paper:** [PCJ paper](https://pcj.csp.org.ph/index.php/pcj/issue/download/29/PCJ%20V14%20N1%20pp1-14%202019)
- **Leaderboard:**
- **Point of Contact:** [Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)
### Dataset Summary
Contains 10k tweets (training set) that are labeled as hate speech or non-hate speech. Released with 4,232 validation and 4,232 testing samples. Collected during the 2016 Philippine Presidential Elections.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular.
## Dataset Structure
### Data Instances
Sample data:
```
{
"text": "Taas ni Mar Roxas ah. KULTONG DILAW NGA NAMAN",
"label": 1
}
```
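Each example is just a `text` string and a binary `label`, so computing class balance over a batch is a one-liner. A small sketch with a hypothetical batch in the card's schema (which of the two label values denotes hate speech is not documented in this card):

```python
def class_balance(examples):
    """Fraction of examples with label == 1 in a list of {'text', 'label'} dicts."""
    if not examples:
        return 0.0
    return sum(ex["label"] for ex in examples) / len(examples)

# Hypothetical batch following the card's schema
batch = [
    {"text": "Taas ni Mar Roxas ah. KULTONG DILAW NGA NAMAN", "label": 1},
    {"text": "example tweet with the other label", "label": 0},
]
print(class_balance(batch))  # 0.5
```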
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
This study seeks to help fill this gap through the development of a model that can automate hate speech detection and classification in Philippine election-related tweets. The role of the microblogging site Twitter as a platform for the expression of support and hate during the 2016 Philippine presidential election has been documented in news reports and systematic studies. Thus, the particular question addressed in this paper is: can existing techniques in language processing and machine learning be applied to detect hate speech in the Philippine election context?
### Source Data
#### Initial Data Collection and Normalization
The dataset used in this study was a subset of a corpus of 1,696,613 tweets crawled by Andrade et al., posted from November 2015 to May 2016 during the campaign period for the Philippine presidential election. Tweets were culled based on the presence of candidate names (e.g., Binay, Duterte, Poe, Roxas, and Santiago) and election-related hashtags (e.g., #Halalan2016, #Eleksyon2016, and #PiliPinas2016).
Data preprocessing was performed to prepare the tweets for feature extraction and classification. It consisted of the following steps: data de-identification, uniform resource locator (URL) removal, special character processing, normalization, hashtag processing, and tokenization.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{Cabasag-2019-hate-speech,
  title={Hate speech in Philippine election-related tweets: Automatic detection and classification using natural language processing.},
  author={Neil Vicente Cabasag, Vicente Raphael Chan, Sean Christian Lim, Mark Edward Gonzales, and Charibeth Cheng},
  journal={Philippine Computing Journal},
  volume={XIV},
  number={1},
  month={August},
  year={2019}
}
```
### Contributions
Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset. |
ilist | 2023-01-25T14:32:46.000Z | [
"task_categories:text-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:awa",
"language:bho",
"language:bra",
"language:hi",
"language:mag",
"license:cc-by-4.0",
... | null | This dataset is introduced in a task which aimed at identifying 5 closely-related languages of Indo-Aryan language family –
Hindi (also known as Khari Boli), Braj Bhasha, Awadhi, Bhojpuri, and Magahi. | null | null | 0 | 6 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- awa
- bho
- bra
- hi
- mag
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: ilist
tags:
- language-identification
dataset_info:
features:
- name: language_id
dtype:
class_label:
names:
'0': AWA
'1': BRA
'2': MAG
'3': BHO
'4': HIN
- name: text
dtype: string
splits:
- name: train
num_bytes: 14362998
num_examples: 70351
- name: test
num_bytes: 2146857
num_examples: 9692
- name: validation
num_bytes: 2407643
num_examples: 10329
download_size: 18284850
dataset_size: 18917498
---
# Dataset Card for ilist
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/kmi-linguistics/vardial2018
- **Paper:** [Language Identification and Morphosyntactic Tagging: The Second VarDial Evaluation Campaign](https://aclanthology.org/W18-3901/)
- **Leaderboard:**
- **Point of Contact:** linguistics.kmi@gmail.com
### Dataset Summary
This dataset was introduced in a shared task aimed at identifying five closely related languages of the Indo-Aryan language family: Hindi (also known as Khari Boli), Braj Bhasha, Awadhi, Bhojpuri and Magahi. These languages form part of a continuum starting from Western Uttar Pradesh (Hindi and Braj Bhasha) to Eastern Uttar Pradesh (Awadhi and Bhojpuri) and the neighbouring Eastern state of Bihar (Bhojpuri and Magahi).
For this task, participants were provided with a dataset of approximately 15,000 sentences in each language, mainly from the domain of literature, published over the web as well as in print.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Hindi, Braj Bhasha, Awadhi, Bhojpuri and Magahi
## Dataset Structure
### Data Instances
```
{
  "language_id": 4,
  "text": "तभी बारिश हुई थी जिसका गीलापन इन मूर्तियों को इन तस्वीरों में एक अलग रूप देता है ."
}
```
### Data Fields
- `text`: text which you want to classify
- `language_id`: label for the text as an integer from 0 to 4
The language ids correspond to the following languages: "AWA", "BRA", "MAG", "BHO", "HIN".
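The mapping above can be applied directly to decode examples. A minimal sketch, assuming only the label order documented in this card (the `decode_label` helper is illustrative, not part of the dataset):

```python
# Map ilist's integer labels to language codes, following the
# label order documented above: AWA, BRA, MAG, BHO, HIN.
ID2LANG = ["AWA", "BRA", "MAG", "BHO", "HIN"]

def decode_label(example):
    """Attach a human-readable language code to an ilist example."""
    example["language"] = ID2LANG[example["language_id"]]
    return example

sample = {"language_id": 4, "text": "तभी बारिश हुई थी ..."}
print(decode_label(sample)["language"])  # HIN
```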
### Data Splits
| | train | valid | test |
|----------------------|-------|-------|-------|
| # of input sentences | 70351 | 9692 | 10329 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The data for this task was collected from both hard printed and digital sources. Printed materials were
obtained from different institutions that promote these languages. We also gathered data from libraries,
as well as from local literary and cultural groups. We collected printed stories, novels and essays in
books, magazines, and newspapers.
#### Initial Data Collection and Normalization
We scanned the printed materials, then we performed OCR, and
finally we asked native speakers of the respective languages to correct the OCR output. Since there are
no specific OCR models available for these languages, we used the Google OCR for Hindi, part of the
Drive API. Since all the languages used the Devanagari script, we expected the OCR to work reasonably
well, and overall it did. We further managed to get some blogs in Magahi and Bhojpuri.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License: http://creativecommons.org/licenses/by/4.0/
### Citation Information
```
@inproceedings{zampieri-etal-2018-language,
title = "Language Identification and Morphosyntactic Tagging: The Second {V}ar{D}ial Evaluation Campaign",
author = {Zampieri, Marcos and
Malmasi, Shervin and
Nakov, Preslav and
Ali, Ahmed and
Shon, Suwon and
Glass, James and
Scherrer, Yves and
Samard{\v{z}}i{\'c}, Tanja and
Ljube{\v{s}}i{\'c}, Nikola and
Tiedemann, J{\"o}rg and
van der Lee, Chris and
Grondelaers, Stefan and
Oostdijk, Nelleke and
Speelman, Dirk and
van den Bosch, Antal and
Kumar, Ritesh and
Lahiri, Bornini and
Jain, Mayank},
booktitle = "Proceedings of the Fifth Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial 2018)",
month = aug,
year = "2018",
address = "Santa Fe, New Mexico, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W18-3901",
pages = "1--17",
}
```
### Contributions
Thanks to [@vasudevgupta7](https://github.com/vasudevgupta7) for adding this dataset. |
isixhosa_ner_corpus | 2023-01-25T14:33:10.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:xh",
"license:other",
"region:us"
] | null | Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags. | @inproceedings{isixhosa_ner_corpus,
author = {K. Podile and
Roald Eiselen},
title = {NCHLT isiXhosa Named Entity Annotated Corpus},
booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
year = {2016},
url = {https://repo.sadilar.org/handle/20.500.12185/312},
} | null | 0 | 6 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- xh
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: IsixhosaNerCorpus
license_details: Creative Commons Attribution 2.5 South Africa License
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
config_name: isixhosa_ner_corpus
splits:
- name: train
num_bytes: 2414995
num_examples: 6284
download_size: 14513302
dataset_size: 2414995
---
# Dataset Card for isiXhosa NER Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [IsiXhosa Ner Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/312)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Martin Puttkammer](mailto:Martin.Puttkammer@nwu.ac.za)
### Dataset Summary
The isiXhosa NER Corpus is a Xhosa dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African government domain and crawled from gov.za websites. It was created to support the NER task for the Xhosa language. The dataset uses CoNLL shared task annotation standards.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is Xhosa.
## Dataset Structure
### Data Instances
A data point consists of sentences separated by an empty line, with tab-separated tokens and tags.
```
{'id': '0',
 'ner_tags': [7, 8, 5, 6, 0],
 'tokens': ['Injongo', 'ye-website', 'yaseMzantsi', 'Afrika', 'kukuvelisa']
}
```
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC",
```
The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PERS), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity.
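The BIO scheme described above can be decoded into entity spans. A self-contained sketch using the tag list from this card and the sample instance (the `extract_entities` helper is illustrative):

```python
# Tag list from this card, in the documented label order.
TAG_NAMES = ["OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG",
             "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def extract_entities(tokens, ner_tags):
    """Group BIO-tagged tokens into (entity_text, entity_type) pairs."""
    entities, current, current_type = [], [], None
    for token, tag_id in zip(tokens, ner_tags):
        tag = TAG_NAMES[tag_id]
        if tag.startswith("B-"):
            if current:  # close the previous entity
                entities.append((" ".join(current), current_type))
            current, current_type = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)
        else:  # OUT, or a stray I- tag with no open entity
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [], None
    if current:  # flush an entity that runs to the end of the sentence
        entities.append((" ".join(current), current_type))
    return entities

tokens = ["Injongo", "ye-website", "yaseMzantsi", "Afrika", "kukuvelisa"]
ner_tags = [7, 8, 5, 6, 0]
print(extract_entities(tokens, ner_tags))
# [('Injongo ye-website', 'MISC'), ('yaseMzantsi Afrika', 'LOC')]
```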
### Data Splits
The data was not split.
## Dataset Creation
### Curation Rationale
The data was created to help introduce resources for a new language, Xhosa.
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data is based on South African government domain and was crawled from gov.za websites.
[More Information Needed]
#### Who are the source language producers?
The data was produced by writers of South African government websites (gov.za).
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The data was annotated during the NCHLT text resource development project.
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa).
See: [more information](http://www.nwu.ac.za/ctext)
### Licensing Information
The data is under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode)
### Citation Information
```
@inproceedings{isixhosa_ner_corpus,
author = { K. Podile and
Roald Eiselen},
title = {NCHLT isiXhosa Named Entity Annotated Corpus},
booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
year = {2016},
url = {https://repo.sadilar.org/handle/20.500.12185/312},
}
```
### Contributions
Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset. |
menyo20k_mt | 2022-12-30T19:38:49.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:yo",
"license:cc-by-nc-4.0",
"arxiv:2103.08647",
"r... | null | MENYO-20k is a multi-domain parallel dataset with texts obtained from news articles, ted talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and professional translators. The dataset has 20,100 parallel sentences split into 10,070 training sentences, 3,397 development sentences, and 6,633 test sentences (3,419 multi-domain, 1,714 news domain, and 1,500 ted talks speech transcript domain). The development and test sets are available upon request. | @dataset{david_ifeoluwa_adelani_2020_4297448,
author = {David Ifeoluwa Adelani and
Jesujoba O. Alabi and
Damilola Adebonojo and
Adesina Ayeni and
Mofe Adeyemi and
Ayodele Awokoya},
title = {MENYO-20k: A Multi-domain English - Yorùbá Corpus
for Machine Translation},
month = nov,
year = 2020,
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.4297448},
url = {https://doi.org/10.5281/zenodo.4297448}
} | null | 1 | 6 | ---
annotations_creators:
- expert-generated
- found
language_creators:
- found
language:
- en
- yo
license:
- cc-by-nc-4.0
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: menyo-20k
pretty_name: MENYO-20k
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- en
- yo
config_name: menyo20k_mt
splits:
- name: train
num_bytes: 2551345
num_examples: 10070
- name: validation
num_bytes: 870011
num_examples: 3397
- name: test
num_bytes: 1905432
num_examples: 6633
download_size: 5206234
dataset_size: 5326788
---
# Dataset Card for MENYO-20k
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/uds-lsv/menyo-20k_MT/
- **Paper:** [The Effect of Domain and Diacritics in Yorùbá-English Neural Machine Translation](https://arxiv.org/abs/2103.08647)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
MENYO-20k is a multi-domain parallel dataset with texts obtained from news articles, ted talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and professional translators. The dataset has 20,100 parallel sentences split into 10,070 training sentences, 3,397 development sentences, and 6,633 test sentences (3,419 multi-domain, 1,714 news domain, and 1,500 ted talks speech transcript domain).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Languages are English and Yoruba.
## Dataset Structure
### Data Instances
An instance example:
```
{'translation':
{'en': 'Unit 1: What is Creative Commons?',
'yo': 'Ìdá 1: Kín ni Creative Commons?'
}
}
```
### Data Fields
- `translation`:
- `en`: English sentence.
- `yo`: Yoruba sentence.
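The nested `translation` feature above can be unpacked into plain sentence pairs. A minimal sketch using the card's own sample record (the `to_pair` helper is illustrative):

```python
# Illustrative sketch of the MENYO-20k example layout described above;
# the sample record is taken from this card's data instance.
example = {
    "translation": {
        "en": "Unit 1: What is Creative Commons?",
        "yo": "Ìdá 1: Kín ni Creative Commons?",
    }
}

def to_pair(example):
    """Return a (source, target) tuple for English-to-Yoruba translation."""
    t = example["translation"]
    return t["en"], t["yo"]

src, tgt = to_pair(example)
print(src)  # Unit 1: What is Creative Commons?
```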
### Data Splits
Training, validation and test splits are available.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is open but restricted to non-commercial use, because some data sources (e.g., TED talks and JW News) require permission for commercial use.
The dataset is licensed under Creative Commons [Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/) License: https://github.com/uds-lsv/menyo-20k_MT/blob/master/LICENSE
### Citation Information
If you use this dataset, please cite this paper:
```
@inproceedings{adelani-etal-2021-effect,
title = "The Effect of Domain and Diacritics in {Y}oruba{--}{E}nglish Neural Machine Translation",
author = "Adelani, David and
Ruiter, Dana and
Alabi, Jesujoba and
Adebonojo, Damilola and
Ayeni, Adesina and
Adeyemi, Mofe and
Awokoya, Ayodele Esther and
Espa{\~n}a-Bonet, Cristina",
booktitle = "Proceedings of the 18th Biennial Machine Translation Summit (Volume 1: Research Track)",
month = aug,
year = "2021",
address = "Virtual",
publisher = "Association for Machine Translation in the Americas",
url = "https://aclanthology.org/2021.mtsummit-research.6",
pages = "61--75",
    abstract = "Massively multilingual machine translation (MT) has shown impressive capabilities, including zero and few-shot translation between low-resource language pairs. However, these models are often evaluated on high-resource languages with the assumption that they generalize to low-resource ones. The difficulty of evaluating MT models on low-resource pairs is often due to lack of standardized evaluation datasets. In this paper, we present MENYO-20k, the first multi-domain parallel corpus with an especially curated orthography for Yoruba{--}English with standardized train-test splits for benchmarking. We provide several neural MT benchmarks and compare them to the performance of popular pre-trained (massively multilingual) MT models both for the heterogeneous test set and its subdomains. Since these pre-trained models use huge amounts of data with uncertain quality, we also analyze the effect of diacritics, a major characteristic of Yoruba, in the training data. We investigate how and when this training condition affects the final quality of a translation and its understandability. Our models outperform massively multilingual models such as Google ($+8.7$ BLEU) and Facebook M2M ($+9.1$) when translating to Yoruba, setting a high quality benchmark for future research.",
}
```
### Contributions
Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
|
multi_booked | 2023-06-01T14:59:47.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:ca",
"language:eu",
"license:cc-by-3.0",
"arxiv:1803.08614"... | null | MultiBooked is a corpus of Basque and Catalan Hotel Reviews Annotated for Aspect-level Sentiment Classification.
The corpora are compiled from hotel reviews taken mainly from booking.com. The corpora are in Kaf/Naf format, which is
an xml-style stand-off format that allows for multiple layers of annotation. Each review was sentence- and
word-tokenized and lemmatized using Freeling for Catalan and ixa-pipes for Basque. Finally, for each language two
annotators annotated opinion holders, opinion targets, and opinion expressions for each review, following the
guidelines set out in the OpeNER project. | @inproceedings{Barnes2018multibooked,
author={Barnes, Jeremy and Lambert, Patrik and Badia, Toni},
title={MultiBooked: A corpus of Basque and Catalan Hotel Reviews Annotated for Aspect-level Sentiment Classification},
booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC'18)},
year = {2018},
month = {May},
date = {7-12},
address = {Miyazaki, Japan},
publisher = {European Language Resources Association (ELRA)},
language = {english}
} | null | 0 | 6 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
- eu
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: multibooked
pretty_name: MultiBooked
dataset_info:
- config_name: ca
features:
- name: text
sequence:
- name: wid
dtype: string
- name: sent
dtype: string
- name: para
dtype: string
- name: word
dtype: string
- name: terms
sequence:
- name: tid
dtype: string
- name: lemma
dtype: string
- name: morphofeat
dtype: string
- name: pos
dtype: string
- name: target
sequence: string
- name: opinions
sequence:
- name: oid
dtype: string
- name: opinion_holder_target
sequence: string
- name: opinion_target_target
sequence: string
- name: opinion_expression_polarity
dtype:
class_label:
names:
'0': StrongNegative
'1': Negative
'2': Positive
'3': StrongPositive
- name: opinion_expression_target
sequence: string
splits:
- name: train
num_bytes: 1952731
num_examples: 567
download_size: 4429415
dataset_size: 1952731
- config_name: eu
features:
- name: text
sequence:
- name: wid
dtype: string
- name: sent
dtype: string
- name: para
dtype: string
- name: word
dtype: string
- name: terms
sequence:
- name: tid
dtype: string
- name: lemma
dtype: string
- name: morphofeat
dtype: string
- name: pos
dtype: string
- name: target
sequence: string
- name: opinions
sequence:
- name: oid
dtype: string
- name: opinion_holder_target
sequence: string
- name: opinion_target_target
sequence: string
- name: opinion_expression_polarity
dtype:
class_label:
names:
'0': StrongNegative
'1': Negative
'2': Positive
'3': StrongPositive
- name: opinion_expression_target
sequence: string
splits:
- name: train
num_bytes: 1175816
num_examples: 343
download_size: 4429415
dataset_size: 1175816
config_names:
- ca
- eu
---
# Dataset Card for MultiBooked
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://hdl.handle.net/10230/33928
- **Repository:** https://github.com/jerbarnes/multibooked
- **Paper:** https://arxiv.org/abs/1803.08614
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
MultiBooked is a corpus of Basque and Catalan Hotel Reviews Annotated for Aspect-level Sentiment Classification.
The corpora are compiled from hotel reviews taken mainly from booking.com. The corpora are in Kaf/Naf format, which is
an xml-style stand-off format that allows for multiple layers of annotation. Each review was sentence- and
word-tokenized and lemmatized using Freeling for Catalan and ixa-pipes for Basque. Finally, for each language two
annotators annotated opinion holders, opinion targets, and opinion expressions for each review, following the
guidelines set out in the OpeNER project.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Each sub-dataset is monolingual in the languages:
- ca: Catalan
- eu: Basque
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `text`: layer of the original text.
- `wid`: list of word IDs for each word within the example.
- `sent`: list of sentence IDs for each sentence within the example.
- `para`: list of paragraph IDs for each paragraph within the example.
- `word`: list of words.
- `terms`: layer of the terms resulting from the analysis of the original text (lemmatization, morphological,
PoS tagging)
- `tid`: list of term IDs for each term within the example.
- `lemma`: list of lemmas.
- `morphofeat`: list of morphological features.
- `pos`: list of PoS tags.
- `target`: list of sublists of the corresponding word IDs (normally, the sublists contain only one element,
in a one-to-one correspondence between words and terms).
- `opinions`: layer of the opinions in the text.
- `oid`: list of opinion IDs
- `opinion_holder_target`: list of sublists of the corresponding term IDs that span the opinion holder.
- `opinion_target_target`: list of sublists of the corresponding term IDs that span the opinion target.
- `opinion_expression_polarity`: list of the opinion expression polarities. The polarity can take one of the values:
`StrongNegative`, `Negative`, `Positive`, or `StrongPositive`.
- `opinion_expression_target`: list of sublists of the corresponding term IDs that span the opinion expression.
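The stand-off layering described above (opinions point at term IDs, terms point at word IDs, words live in the text layer) can be resolved back to surface text. A sketch of the schema with a toy Catalan record whose IDs and morphological values are hypothetical:

```python
def opinion_expression_words(example, opinion_index):
    """Resolve an opinion expression's term IDs back to surface words.

    Follows the layering described above: opinions reference term IDs,
    terms reference word IDs, and words live in the text layer. This is
    an illustrative sketch of the schema, not library code.
    """
    wid_to_word = dict(zip(example["text"]["wid"], example["text"]["word"]))
    tid_to_wids = dict(zip(example["terms"]["tid"], example["terms"]["target"]))
    term_ids = example["opinions"]["opinion_expression_target"][opinion_index]
    words = []
    for tid in term_ids:
        for wid in tid_to_wids[tid]:
            words.append(wid_to_word[wid])
    return " ".join(words)

# Toy record following the field layout (IDs and feature values are hypothetical):
example = {
    "text": {"wid": ["w1", "w2"], "sent": ["s1", "s1"],
             "para": ["p1", "p1"], "word": ["molt", "bona"]},
    "terms": {"tid": ["t1", "t2"], "lemma": ["molt", "bo"],
              "morphofeat": ["RG", "AQ0FS0"], "pos": ["R", "G"],
              "target": [["w1"], ["w2"]]},
    "opinions": {"oid": ["o1"], "opinion_holder_target": [[]],
                 "opinion_target_target": [[]],
                 "opinion_expression_polarity": [3],
                 "opinion_expression_target": [["t1", "t2"]]},
}
print(opinion_expression_words(example, 0))  # molt bona
```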
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Dataset is under the [CC-BY 3.0](https://creativecommons.org/licenses/by/3.0/) license.
### Citation Information
```
@inproceedings{Barnes2018multibooked,
author={Barnes, Jeremy and Lambert, Patrik and Badia, Toni},
title={MultiBooked: A corpus of Basque and Catalan Hotel Reviews Annotated for Aspect-level Sentiment Classification},
booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC'18)},
year = {2018},
month = {May},
date = {7-12},
address = {Miyazaki, Japan},
publisher = {European Language Resources Association (ELRA)},
language = {english}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
para_pat | 2022-12-02T11:39:09.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:translation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:10K<n<100K... | null | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts
This dataset contains the developed parallel corpus from the open access Google
Patents dataset in 74 language pairs, comprising more than 68 million sentences
and 800 million tokens. Sentences were automatically aligned using the Hunalign algorithm
for the largest 22 language pairs, while the others were abstract (i.e. paragraph) aligned. | @inproceedings{soares-etal-2020-parapat,
title = "{P}ara{P}at: The Multi-Million Sentences Parallel Corpus of Patents Abstracts",
author = "Soares, Felipe and
Stevenson, Mark and
Bartolome, Diego and
Zaretskaya, Anna",
booktitle = "Proceedings of The 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.465",
pages = "3769--3774",
language = "English",
ISBN = "979-10-95546-34-4",
} | null | 9 | 6 | ---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language:
- cs
- de
- el
- en
- es
- fr
- hu
- ja
- ko
- pt
- ro
- ru
- sk
- uk
- zh
license:
- cc-by-4.0
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- translation
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: parapat
pretty_name: Parallel Corpus of Patents Abstracts
dataset_info:
- config_name: el-en
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- el
- en
splits:
- name: train
num_bytes: 24818840
num_examples: 10855
download_size: 24894705
dataset_size: 24818840
- config_name: cs-en
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- cs
- en
splits:
- name: train
num_bytes: 117555722
num_examples: 78977
download_size: 118010340
dataset_size: 117555722
- config_name: en-hu
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- hu
splits:
- name: train
num_bytes: 80637157
num_examples: 42629
download_size: 80893995
dataset_size: 80637157
- config_name: en-ro
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- ro
splits:
- name: train
num_bytes: 80290819
num_examples: 48789
download_size: 80562562
dataset_size: 80290819
- config_name: en-sk
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- sk
splits:
- name: train
num_bytes: 31510348
num_examples: 23410
download_size: 31707728
dataset_size: 31510348
- config_name: en-uk
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- uk
splits:
- name: train
num_bytes: 136808871
num_examples: 89226
download_size: 137391928
dataset_size: 136808871
- config_name: es-fr
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 53767035
num_examples: 32553
download_size: 53989438
dataset_size: 53767035
- config_name: fr-ru
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: train
num_bytes: 33915203
num_examples: 10889
download_size: 33994490
dataset_size: 33915203
- config_name: de-fr
features:
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: train
num_bytes: 655742822
num_examples: 1167988
download_size: 204094654
dataset_size: 655742822
- config_name: en-ja
features:
- name: translation
dtype:
translation:
languages:
- en
- ja
splits:
- name: train
num_bytes: 3100002828
num_examples: 6170339
download_size: 1093334863
dataset_size: 3100002828
- config_name: en-es
features:
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 337690858
num_examples: 649396
download_size: 105202237
dataset_size: 337690858
- config_name: en-fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 6103179552
num_examples: 12223525
download_size: 1846098331
dataset_size: 6103179552
- config_name: de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 1059631418
num_examples: 2165054
download_size: 339299130
dataset_size: 1059631418
- config_name: en-ko
features:
- name: translation
dtype:
translation:
languages:
- en
- ko
splits:
- name: train
num_bytes: 1466703472
num_examples: 2324357
download_size: 475152089
dataset_size: 1466703472
- config_name: fr-ja
features:
- name: translation
dtype:
translation:
languages:
- fr
- ja
splits:
- name: train
num_bytes: 211127021
num_examples: 313422
download_size: 69038401
dataset_size: 211127021
- config_name: en-zh
features:
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: train
num_bytes: 2297993338
num_examples: 4897841
download_size: 899568201
dataset_size: 2297993338
- config_name: en-ru
features:
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: train
num_bytes: 1974874480
num_examples: 4296399
download_size: 567240359
dataset_size: 1974874480
- config_name: fr-ko
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- fr
- ko
splits:
- name: train
num_bytes: 222006786
num_examples: 120607
download_size: 64621605
dataset_size: 222006786
- config_name: ru-uk
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- ru
- uk
splits:
- name: train
num_bytes: 163442529
num_examples: 85963
download_size: 38709524
dataset_size: 163442529
- config_name: en-pt
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- pt
splits:
- name: train
num_bytes: 37372555
num_examples: 23121
download_size: 12781082
dataset_size: 37372555
---
# Dataset Card for ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts](https://figshare.com/articles/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632)
- **Repository:** [ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts](https://github.com/soares-f/parapat)
- **Paper:** [ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts](https://www.aclweb.org/anthology/2020.lrec-1.465/)
- **Point of Contact:** [Felipe Soares](mailto:fs@felipesoares.net)
### Dataset Summary
ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts
This dataset contains the developed parallel corpus from the open access Google Patents dataset in 74 language pairs, comprising more than 68 million sentences and 800 million tokens. Sentences were automatically aligned using the Hunalign algorithm for the largest 22 language pairs, while the others were abstract (i.e. paragraph) aligned.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset contains samples in cs, de, el, en, es, fr, hu, ja, ko, pt, ro, ru, sk, uk, zh
## Dataset Structure
### Data Instances
They are of two types, depending on the configuration:
First type:
```
{
  "translation": {
    "en": "A method for converting a series of m-bit information words to a modulated signal is described.",
    "es": "Se describe un método para convertir una serie de palabras de informacion de bits m a una señal modulada."
  }
}
```
Second type:
```
{
  "family_id": 10944407,
  "index": 844,
  "translation": {
    "el": "αφές ο οποίος παρασκευάζεται με χαρμάνι ελληνικού καφέ είτε σε συσκευή καφέ εσπρέσο είτε σε συσκευή γαλλικού καφέ (φίλτρου) είτε κατά τον παραδοσιακό τρόπο του ελληνικού καφέ και διυλίζεται, κτυπιέται στη συνέχεια με πάγο σε χειροκίνητο ή ηλεκτρικόμίξερ ώστε να παγώσει ομοιόμορφα και να αποκτήσει πλούσιο αφρό και σερβίρεται σε ποτήρι. ΰ",
    "en": "offee prepared using the mix for Greek coffee either in an espresso - type coffee making machine, or in a filter coffee making machine or in the traditional way for preparing Greek coffee and is then filtered , shaken with ice manually or with an electric mixer so that it freezes homogeneously, obtains a rich froth and is served in a glass."
  }
}
```
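Both layouts carry a nested `translation` dictionary keyed by language code; the abstract-aligned configurations simply add `index` and `family_id` on top, so one accessor covers both. A minimal sketch (not the official loader; the example strings are abbreviated):

```python
# Minimal sketch: both record layouts expose a nested "translation" dict keyed
# by ISO language codes, so a single accessor works for either type.
def get_pair(example, src, tgt):
    """Return the (source, target) texts from a ParaPat-style record."""
    t = example["translation"]
    return t[src], t[tgt]

sentence_aligned = {
    "translation": {
        "en": "A method for converting a series of m-bit information words ...",
        "es": "Se describe un metodo para convertir una serie de palabras ...",
    }
}
abstract_aligned = {
    "family_id": 10944407,
    "index": 844,
    "translation": {"el": "...", "en": "coffee prepared using the mix ..."},
}

print(get_pair(sentence_aligned, "en", "es")[0])
print(get_pair(abstract_aligned, "el", "en")[1])
```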
### Data Fields
**index:** position in the corpus
**family_id:** identifier that groups patents referring to the same invention; researchers can use that information for other text-mining purposes.
**translation:** dictionary containing the source and target sentence for that example
### Data Splits
No official train/val/test splits are given.
Parallel corpora aligned at sentence level:
|Language Pair|# Sentences|# Unique Tokens|
|--------|-----|------|
|EN/ZH|4.9M|155.8M|
|EN/JA|6.1M|189.6M|
|EN/FR|12.2M|455M|
|EN/KO|2.3M|91.4M|
|EN/DE|2.2M|81.7M|
|EN/RU|4.3M|107.3M|
|DE/FR|1.2M|38.8M|
|FR/JA|0.3M|9.9M|
|EN/ES|0.6M|24.6M|
Parallel corpora aligned at abstract level:
|Language Pair|# Abstracts|
|--------|-----|
|FR/KO|120,607|
|EN/UK|89,227|
|RU/UK|85,963|
|CS/EN|78,978|
|EN/RO|48,789|
|EN/HU|42,629|
|ES/FR|32,553|
|EN/SK|23,410|
|EN/PT|23,122|
|BG/EN|16,177|
|FR/RU|10,889|
## Dataset Creation
### Curation Rationale
The availability of parallel corpora is required by current Statistical and Neural Machine Translation systems (SMT and NMT). Acquiring a high-quality parallel corpus that is large enough to train MT systems, particularly NMT ones, is not a trivial task due to the need for correct alignment and, in many cases, human curation. In this context, the automated creation of parallel corpora from freely available resources is extremely important in Natural Language Processing (NLP).
### Source Data
#### Initial Data Collection and Normalization
Google makes patents data available under the Google Cloud Public Datasets. BigQuery is a Google service that supports the efficient storage and querying of massive datasets, a task that is usually challenging for conventional SQL databases. For instance, filtering the September 2019 release of the dataset, which contains more than 119 million rows, can take less than 1 minute for text fields. The on-demand billing for BigQuery is based on the amount of data processed by each query run; thus, for a single query that performs a full scan, the cost can be over USD 15.00, since the cost per TB is currently USD 5.00.
#### Who are the source language producers?
BigQuery is a Google service that supports the efficient storage and querying of massive datasets, a task that is usually challenging for conventional SQL databases.
### Annotations
#### Annotation process
The following steps describe the process of producing patent aligned abstracts:
1. Load the nth individual file
2. Remove rows where fewer than two abstracts in different languages exist for a given family id. The family id attribute is used to group patents that refer to the same invention. By removing these rows, we remove abstracts that are available in only one language.
3. From the resulting set, create all possible parallel abstracts from the available languages. For instance, an abstract may be available in English, French and German, thus, the possible language pairs are English/French, English/German, and French/German.
4. Store the parallel patents into an SQL database for easier future handling and sampling.
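Steps 2 and 3 above can be sketched in a few lines; the field names (`family_id`, `lang`, `abstract`) are illustrative assumptions for the sketch, not the actual BigQuery schema:

```python
# Illustrative sketch of steps 2-3: drop patent families whose abstracts exist
# in fewer than two languages, then emit every possible language pair.
from collections import defaultdict
from itertools import combinations

def parallel_abstracts(rows):
    families = defaultdict(dict)
    for row in rows:
        families[row["family_id"]][row["lang"]] = row["abstract"]
    pairs = []
    for fam_id, by_lang in families.items():
        if len(by_lang) < 2:  # step 2: abstract available in only one language
            continue
        for l1, l2 in combinations(sorted(by_lang), 2):  # step 3: all pairs
            pairs.append((fam_id, l1, by_lang[l1], l2, by_lang[l2]))
    return pairs

rows = [
    {"family_id": 1, "lang": "en", "abstract": "A method ..."},
    {"family_id": 1, "lang": "fr", "abstract": "Un procede ..."},
    {"family_id": 1, "lang": "de", "abstract": "Ein Verfahren ..."},
    {"family_id": 2, "lang": "en", "abstract": "single-language family"},
]
pairs = parallel_abstracts(rows)
print(len(pairs))  # family 1 yields de/en, de/fr and en/fr; family 2 is dropped
```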
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Funded by Google Tensorflow Research Cloud.
### Licensing Information
CC BY 4.0
### Citation Information
```
@inproceedings{soares-etal-2020-parapat,
title = "{P}ara{P}at: The Multi-Million Sentences Parallel Corpus of Patents Abstracts",
author = "Soares, Felipe and
Stevenson, Mark and
Bartolome, Diego and
Zaretskaya, Anna",
booktitle = "Proceedings of The 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.465",
pages = "3769--3774",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
[DOI](https://doi.org/10.6084/m9.figshare.12627632)
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset. |
reclor | 2022-11-18T21:41:37.000Z | [
"region:us"
] | null | Logical reasoning is an important ability to examine, analyze, and critically evaluate arguments as they occur in ordinary
language, as defined by LSAC. ReClor is a dataset extracted from logical reasoning questions of standardized graduate
admission examinations. Empirical results show that state-of-the-art models struggle on ReClor with poor performance,
indicating more research is needed to essentially enhance the logical reasoning ability of current models. We hope this
dataset could help push Machine Reading Comprehension (MRC) towards more complicated reasoning. | @inproceedings{yu2020reclor,
author = {Yu, Weihao and Jiang, Zihang and Dong, Yanfei and Feng, Jiashi},
title = {ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning},
booktitle = {International Conference on Learning Representations (ICLR)},
month = {April},
year = {2020}
} | null | 1 | 6 | ---
paperswithcode_id: reclor
pretty_name: ReClor
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: label
dtype: string
- name: id_string
dtype: string
splits:
- name: train
num_bytes: 4711114
num_examples: 4638
- name: test
num_bytes: 1017354
num_examples: 1000
- name: validation
num_bytes: 518604
num_examples: 500
download_size: 0
dataset_size: 6247072
---
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@JetRunner](https://github.com/JetRunner), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
swedish_reviews | 2023-01-25T14:45:25.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:sv",
"license:unknown",
"region:us"
] | null | null | null | null | 2 | 6 | ---
annotations_creators:
- found
language_creators:
- found
language:
- sv
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Swedish Reviews
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
config_name: plain_text
splits:
- name: test
num_bytes: 6296541
num_examples: 20697
- name: validation
num_bytes: 6359227
num_examples: 20696
- name: train
num_bytes: 18842891
num_examples: 62089
download_size: 11841056
dataset_size: 31498659
---
# Dataset Card for Swedish Reviews
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [swedish_reviews homepage](https://github.com/timpal0l/swedish-sentiment)
- **Repository:** [swedish_reviews repository](https://github.com/timpal0l/swedish-sentiment)
- **Point of Contact:** [Tim Isbister](mailto:timisbisters@gmail.com)
### Dataset Summary
The dataset is scraped from various Swedish websites where reviews are present. The dataset consists of 103,482 samples split across `train`, `valid` and `test`. It is a sample of the full dataset, balanced towards the minority class (negative); the original data dump was heavily skewed towards positive samples with a 95/5 ratio.
### Supported Tasks and Leaderboards
This dataset can be used to evaluate sentiment classification on Swedish.
### Languages
The text in the dataset is in Swedish.
## Dataset Structure
### Data Instances
What a sample looks like:
```
{
'text': 'Jag tycker huggingface är ett grymt project!',
'label': 1,
}
```
### Data Fields
- `text`: A text where the sentiment expression is present.
- `label`: an int representing the label, `0` for negative and `1` for positive.
### Data Splits
The data is split into a training, validation and test set. The final split sizes are as follows:
| Train | Valid | Test |
| ------ | ----- | ---- |
| 62089 | 20696 | 20697 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
Various Swedish websites with product reviews.
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Swedish
### Annotations
[More Information Needed]
#### Annotation process
Automatically annotated based on user reviews on a 1-5 scale, where 1-2 is considered `negative` and 4-5 `positive`; 3 is skipped as it tends to be more neutral.
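The labelling rule above maps cleanly to a small helper; this is a sketch of the rule as described, since the dataset itself ships only the final 0/1 labels:

```python
# Sketch of the labelling rule: user ratings 1-2 become negative (0),
# 4-5 become positive (1), and 3 is dropped as neutral.
def rating_to_label(rating):
    if rating in (1, 2):
        return 0
    if rating in (4, 5):
        return 1
    return None  # neutral, skipped during dataset construction

labels = [rating_to_label(r) for r in [1, 2, 3, 4, 5]]
print(labels)  # [0, 0, None, 1, 1]
```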
#### Who are the annotators?
The users who have been using the products.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
[More Information Needed]
### Dataset Curators
The corpus was scraped by @timpal0l
### Licensing Information
Research only.
### Citation Information
No paper exists currently.
### Contributions
Thanks to [@timpal0l](https://github.com/timpal0l) for adding this dataset. |
telugu_news | 2023-01-25T14:45:35.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"task_ids:multi-class-classification",
"task_ids:topic-classification",
"annotations_creators:machine-generated",
"language_creato... | null | This dataset contains Telugu language news articles along with respective
topic labels (business, editorial, entertainment, nation, sport) extracted from
the daily Andhra Jyoti. This dataset could be used to build Classification and Language Models. | @InProceedings{kaggle:dataset,
title = {Telugu News - Natural Language Processing for Indian Languages},
authors={Sudalai Rajkumar, Anusha Motamarri},
year={2019}
} | null | 0 | 6 | ---
annotations_creators:
- machine-generated
language_creators:
- other
language:
- te
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- text-classification
task_ids:
- language-modeling
- masked-language-modeling
- multi-class-classification
- topic-classification
pretty_name: TeluguNews
dataset_info:
features:
- name: sno
dtype: int32
- name: date
dtype: string
- name: heading
dtype: string
- name: body
dtype: string
- name: topic
dtype:
class_label:
names:
'0': business
'1': editorial
'2': entertainment
'3': nation
'4': sports
splits:
- name: train
num_bytes: 69400234
num_examples: 17312
- name: test
num_bytes: 17265514
num_examples: 4329
download_size: 0
dataset_size: 86665748
---
# Dataset Card for Telugu News
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.kaggle.com/sudalairajkumar/telugu-nlp?select=telugu_news
- **Repository:** https://github.com/AnushaMotamarri/Telugu-Newspaper-Article-Dataset
### Dataset Summary
This dataset contains Telugu language news articles along with respective topic
labels (business, editorial, entertainment, nation, sport) extracted from the daily Andhra Jyoti.
This dataset could be used to build Classification and Language Models.
### Supported Tasks and Leaderboards
Multiclass classification, Topic Classification, Language Model
### Languages
TE - Telugu, India
## Dataset Structure
### Data Instances
Two CSV files (train, test) with five columns (sno, date, heading, body, topic).
### Data Fields
- sno: id
- date: publish date of the news article
- heading: article heading/title
- body: article body/content
- topic: one of the following topics (business, editorial, entertainment, nation, sport)
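The topic column is stored as a class label; the id-to-name mapping below mirrors the one declared in this card's metadata:

```python
# Topic label ids as declared in the card metadata (0..4).
TOPICS = ["business", "editorial", "entertainment", "nation", "sports"]

def topic_name(label):
    """Map an integer topic label to its human-readable name."""
    return TOPICS[label]

print([topic_name(i) for i in range(5)])
```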
### Data Splits
Train and Test
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
- https://www.kaggle.com/sudalairajkumar/telugu-nlp?select=telugu_news
- https://github.com/AnushaMotamarri/Telugu-Newspaper-Article-Dataset
#### Initial Data Collection and Normalization
The source data is scraped articles from archives of Telugu newspaper website Andhra Jyoti.
A set of queries were created and the corresponding ground truth answers were retrieved by a combination of BM25 and tf-idf.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Sudalai Rajkumar, Anusha Motamarri
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{kaggle:dataset,
title = {Telugu News - Natural Language Processing for Indian Languages},
authors={Sudalai Rajkumar, Anusha Motamarri},
year={2019}
}
```
### Contributions
Thanks to [@oostopitre](https://github.com/oostopitre) for adding this dataset. |
turkish_movie_sentiment | 2022-11-03T16:07:48.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:tr",
"license:unknown",
"region:us... | null | This data set is a dataset from kaggle consisting of Turkish movie reviews and scored between 0-5. | null | null | 3 | 6 | ---
annotations_creators:
- found
language_creators:
- found
language:
- tr
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- sentiment-scoring
paperswithcode_id: null
pretty_name: 'TurkishMovieSentiment: This dataset contains Turkish movie reviews.'
dataset_info:
features:
- name: point
dtype: float32
- name: comment
dtype: string
- name: film_name
dtype: string
config_name: turkishmoviesentiment
splits:
- name: train
num_bytes: 33954560
num_examples: 83227
download_size: 0
dataset_size: 33954560
---
# Dataset Card for TurkishMovieSentiment: This dataset contains Turkish movie reviews.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks](https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks)
- **Point of Contact:** [Mustafa Keskin](https://www.linkedin.com/in/mustfkeskin/)
### Dataset Summary
This dataset, from Kaggle, consists of Turkish movie reviews scored between 0 and 5.
### Languages
The dataset is based on Turkish.
## Dataset Structure
### Data Instances
**Example 1:**
**Comment:** Jean Reno denince zaten leon filmi gelir akla izlemeyen kalmamıştır ama kaldıysada ee ne duruyorsun hemen izle :),
**Film_name:** Sevginin Gücü,
**Point:** 5,0
**Example 2:**
**Comment:** Bence güzel bi film olmush.İzlenmeli.İnsana şükretmek gerektini hatırlatıyor.Ama cok da poh pohlanacak bi sey yapmamıslar,
**Film_name:** Cinderella Man,
**Point:** 2,5
### Data Fields
- **comment**(string): Contains the Turkish movie review.
- **film_name**(string): Film name in Turkish.
- **point**(float): A floating-point score in the range [0, 5].
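Note that the example points above are printed with a comma as the decimal separator ("5,0", "2,5"). A small sketch for normalising such values to floats and binarising sentiment; the 3.0 cut-off is an assumption for illustration, not part of the dataset:

```python
# Normalise comma-decimal point strings ("2,5") to floats and binarise
# sentiment at an assumed threshold of 3.0.
def parse_point(raw):
    return float(raw.replace(",", "."))

def to_sentiment(point, threshold=3.0):
    return "positive" if point > threshold else "negative"

print(parse_point("5,0"), to_sentiment(parse_point("5,0")))  # 5.0 positive
print(parse_point("2,5"), to_sentiment(parse_point("2,5")))  # 2.5 negative
```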
### Data Splits
The dataset is not divided into train and test sets; only a single train split is provided.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by [Mustafa Keskin](https://www.linkedin.com/in/mustfkeskin/).
### Licensing Information
The data is under the [CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@yavuzKomecoglu](https://github.com/yavuzKomecoglu) for adding this dataset. |
ASCCCCCCCC/amazon_zh_simple | 2022-02-22T01:37:48.000Z | [
"license:apache-2.0",
"region:us"
] | ASCCCCCCCC | null | null | null | 1 | 6 | ---
license: apache-2.0
---
|
Aisha/BAAD16 | 2022-10-22T05:31:54.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:found",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:origi... | Aisha | null | null | null | 0 | 6 | ---
annotations_creators:
- found
- crowdsourced
- expert-generated
language_creators:
- found
- crowdsourced
language:
- bn
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'BAAD16: Bangla Authorship Attribution Dataset (16 Authors)'
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
## Description
**BAAD16** is an **Authorship Attribution dataset for Bengali Literature**. It was collected and analyzed by the authors of [this paper](https://arxiv.org/abs/2001.05316). It was created by scraping text from an online Bangla e-library using a custom web crawler and contains literary works of various famous Bangla writers. It contains novels, stories, series, and other works of 16 authors. Each sample document is created with 750 words. The dataset is imbalanced and resembles real-world scenarios more closely, where not all the authors will have a large number of sample texts. The following table gives more details about the dataset.
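The card states that each sample document is built from 750 words. A minimal sketch of that chunking step follows; the exact boundary handling (here, dropping a short final chunk) is an assumption, not a documented detail of the original pipeline:

```python
# Split a long document into fixed-size 750-word samples, discarding any
# trailing chunk shorter than the target size (an assumed convention).
def make_samples(text, words_per_sample=750):
    words = text.split()
    samples = []
    for i in range(0, len(words), words_per_sample):
        chunk = words[i : i + words_per_sample]
        if len(chunk) == words_per_sample:
            samples.append(" ".join(chunk))
    return samples

doc = "word " * 1600  # a toy 1600-word document
samples = make_samples(doc)
print(len(samples), len(samples[0].split()))  # 2 750
```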
| Author Name | Number of Samples | Word Count | Unique Word |
| --- | --- | --- | --- |
| zahir rayhan | 185 | 138k | 20k |
| nazrul | 223 | 167k | 33k |
| manik bandhopaddhay | 469 | 351k | 44k |
| nihar ronjon gupta | 476 | 357k | 43k |
| bongkim | 562 | 421k | 62k |
| tarashonkor | 775 | 581k | 84k |
| shottojit roy | 849 | 636k | 67k |
| shordindu | 888 | 666k | 84k |
| toslima nasrin | 931 | 698k | 76k |
| shirshendu | 1048 | 786k | 69k |
| zafar iqbal | 1100 | 825k | 53k |
| robindronath | 1259 | 944k | 89k |
| shorotchandra | 1312 | 984k | 78k |
| shomresh | 1408 | 1056k | 69k |
| shunil gongopaddhay | 1963 | 1472k | 109k |
| humayun ahmed | 4518 | 3388k | 161k |
| **Total** | 17,966 | 13,474,500 | 590,660 |
| **Average** | 1,122.875 | 842,156.25 | 71,822.25 |
## Citation
If you use this dataset, please cite the paper [Authorship Attribution in Bangla literature using Character-level CNN](https://ieeexplore.ieee.org/abstract/document/9038560/). [Archive link](https://arxiv.org/abs/2001.05316).
```
@inproceedings{BAAD16Dataset,
title={Authorship Attribution in Bangla literature using Character-level CNN},
author={Khatun, Aisha and Rahman, Anisur and Islam, Md Saiful and others},
booktitle={2019 22nd International Conference on Computer and Information Technology (ICCIT)},
pages={1--5},
year={2019},
organization={IEEE},
doi={10.1109/ICCIT48885.2019.9038560}
}
```
This dataset is also available in Mendeley: [BAAD16 dataset](https://data.mendeley.com/datasets/6d9jrkgtvv/4). Always make sure to use the latest version of the dataset. Cite the dataset directly by:
```
@misc{BAAD16Dataset,
author = {Khatun, Aisha and Rahman, Anisur and Islam, Md. Saiful},
title = {BAAD16: Bangla Authorship Attribution Dataset},
year={2019},
doi = {10.17632/6d9jrkgtvv.4},
howpublished= {\url{https://data.mendeley.com/datasets/6d9jrkgtvv/4}}
}
``` |
BritishLibraryLabs/EThOS-PhD-metadata | 2022-07-23T21:14:57.000Z | [
"task_categories:text-classification",
"task_categories:fill-mask",
"task_ids:multi-label-classification",
"task_ids:masked-language-modeling",
"multilinguality:monolingual",
"language:en",
"region:us"
] | BritishLibraryLabs | The data in this collection comprises the bibliographic metadata for all UK doctoral theses listed in EThOS, the UK's national thesis service.
We estimate the data covers around 98% of all PhDs ever awarded by UK Higher Education institutions, dating back to 1787.
Thesis metadata from every PhD-awarding university in the UK is included. | \
@misc{british_library_ethos,
title={UK Doctoral Thesis Metadata from EThOS},
url={https://doi.org/10.23636/ybpt-nh33},
author={{British Library} and {Rosie, Heather}},
year={2021}} | null | 1 | 6 | ---
annotations_creators: []
language:
- en
language_creators: []
license: []
multilinguality:
- monolingual
pretty_name: EThOS PhD metadata
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-classification
- fill-mask
task_ids:
- multi-label-classification
- masked-language-modeling
---
# Dataset Card for EThOS PhD metadata
## Table of Contents
- [Dataset Card for EThOS PhD metadata](#dataset-card-for-ethos-phd-metadata)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Supervised tasks](#supervised-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**: https://bl.iro.bl.uk/concern/datasets/c815b271-09be-4123-8156-405094429198?locale=en
- **Repository:** https://doi.org/10.23636/ybpt-nh33
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The data in this collection comprises the bibliographic metadata for all UK doctoral theses listed in EThOS, the UK's national thesis service. We estimate the data covers around 98% of all PhDs ever awarded by UK Higher Education institutions, dating back to 1787. Thesis metadata from every PhD-awarding university in the UK is included. You can investigate and re-use this unique collection of UK universities' PhD thesis data to analyse trends in postgraduate research, make connections between researchers, apply large data analysis, improve citation of theses and many more applications.
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
#### Supervised tasks
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
[More Information Needed]
### Data Instances
An example data instance:
```python
{'Abstract': ' ',
'Author': 'Loizou, Panos A.',
'Author ISNI': 'https://isni.org/isni/0000000136122593',
'DOI': ' ',
'Date': datetime.datetime(1989, 1, 1, 0, 0),
'EThOS URL': 'https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.232781',
'Funder(s)': ' ',
'IR URL': ' ',
'Institution': 'University of Manchester',
'Institution ISNI': 'https://isni.org/isni/0000000121662407',
'ORCID': ' ',
'Qualification': 'Thesis (Ph.D.)',
'Subject Discipline': 0,
'Supervisor(s)': ' ',
'Title': 'Computation and measurement of turbulent flow through idealized turbine blade passages'}
```
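Note that missing values in the instance above are encoded as a single-space string (e.g. `Abstract`, `DOI`, `Funder(s)`). A small sketch, not part of any official tooling, that normalises such fields to `None`:

```python
from datetime import datetime

# One record, abbreviated from the instance shown above.
record = {
    "Abstract": " ",
    "Author": "Loizou, Panos A.",
    "DOI": " ",
    "Date": datetime(1989, 1, 1),
    "Institution": "University of Manchester",
}

def clean_record(rec):
    # EThOS metadata marks missing string values with a single space.
    return {k: (None if isinstance(v, str) and not v.strip() else v)
            for k, v in rec.items()}

cleaned = clean_record(record)
```

This keeps non-string fields such as `Date` untouched while turning blank strings into proper missing values.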
### Data Fields
[More Information Needed]
### Data Splits
This dataset contains a single split `train`.
## Dataset Creation
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The data is licensed under the [CC BY 4.0 Attribution](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information
|
Fraser/short-jokes | 2021-02-24T08:31:31.000Z | [
"region:us"
] | Fraser | Copy of [Kaggle dataset](https://www.kaggle.com/abhinavmoudgil95/short-jokes), adding to Huggingface for ease of use.
Description from Kaggle:
Context
Generating humor is a complex task in the domain of machine learning, and it requires the models to understand the deep semantic meaning of a joke in order to generate new ones. Such problems, however, are difficult to solve due to a number of reasons, one of which is the lack of a database that gives an elaborate list of jokes. Thus, a large corpus of over 0.2 million jokes has been collected by scraping several websites containing funny and short jokes.
Visit my Github repository for more information regarding collection of data and the scripts used.
Content
This dataset is in the form of a csv file containing 231,657 jokes. Length of jokes ranges from 10 to 200 characters. Each line in the file contains a unique ID and joke.
Disclaimer
It has been attempted to keep the jokes as clean as possible. Since the data has been collected by scraping websites, it is possible that there may be a few jokes that are inappropriate or offensive to some people. | null | null | 5 | 6 | Copy of [Kaggle dataset](https://www.kaggle.com/abhinavmoudgil95/short-jokes), adding to Huggingface for ease of use.
Description from Kaggle:
Context
Generating humor is a complex task in the domain of machine learning, and it requires the models to understand the deep semantic meaning of a joke in order to generate new ones. Such problems, however, are difficult to solve due to a number of reasons, one of which is the lack of a database that gives an elaborate list of jokes. Thus, a large corpus of over 0.2 million jokes has been collected by scraping several websites containing funny and short jokes.
Visit my Github repository for more information regarding collection of data and the scripts used.
Content
This dataset is in the form of a csv file containing 231,657 jokes. Length of jokes ranges from 10 to 200 characters. Each line in the file contains a unique ID and joke.
Disclaimer
It has been attempted to keep the jokes as clean as possible. Since the data has been collected by scraping websites, it is possible that there may be a few jokes that are inappropriate or offensive to some people.
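The card above describes a CSV where each line carries a unique ID and a joke of 10 to 200 characters; a minimal reading-and-filtering sketch with the standard library (the header names `ID` and `Joke` are assumptions, since the card does not name the columns):

```python
import csv
import io

# A tiny in-memory stand-in for the real file; the header names "ID" and
# "Joke" are assumptions, as the card does not name the columns.
sample_csv = (
    "ID,Joke\n"
    "1,Why did the chicken cross the road? To get to the other side.\n"
    "2,I told my computer a joke. It did not laugh.\n"
)

jokes = list(csv.DictReader(io.StringIO(sample_csv)))

# The card states joke lengths range from 10 to 200 characters.
in_range = [row for row in jokes if 10 <= len(row["Joke"]) <= 200]
```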
|
SetFit/amazon_polarity | 2022-01-19T20:49:58.000Z | [
"region:us"
] | SetFit | null | null | null | 0 | 6 | Entry not found |
Sunbird/salt-dataset | 2022-03-28T13:04:56.000Z | [
"region:us"
] | Sunbird | null | null | null | 3 | 6 | A parallel text corpus, **SALT -- (Sunbird African Language Translation Dataset)**, was created for five Ugandan languages (Luganda,
Runyankore, Acholi, Lugbara and Ateso) and various methods were explored to train and evaluate translation models. |
SuperAI2-Machima/ThaiQA_LST20 | 2022-02-25T06:29:22.000Z | [
"language:thai",
"language:th",
"license:mit",
"question-generation dataset",
"qa dataset",
"region:us"
] | SuperAI2-Machima | null | null | null | 0 | 6 | ---
tags:
- question-generation dataset
- qa dataset
language:
- thai
- th
datasets:
- LST20
license: mit
---
[SuperAI Engineer Season 2](https://superai.aiat.or.th/) , [Machima](https://machchima.superai.me/)
Machima_ThaiQA_LST20 is a dataset of questions and answers extracted from the articles in the LST20 dataset. A total of 7,642 question-answer pairs were extracted. The data has 4 columns: context, question, answer, and status, respectively.
An example is shown below:
context : ด.ต.ประสิทธิ์ ชาหอมชื่นอายุ 55 ปี ผบ.หมู่งาน ป.ตชด. 24 อุดรธานีถูกยิงด้วยอาวุธปืนอาก้าเข้าที่แขนซ้าย 3 นัดหน้าท้อง 1 นัดส.ต.อ.ประเสริฐ ใหญ่สูงเนินอายุ 35 ปี ผบ.หมู่กก. 1 ปส.2 บช.ปส. ถูกยิงเข้าที่แขนขวากระดูกแตกละเอียดร.ต.อ.ชวพล หมื่นโรจน์อายุ 32 ปีรอง สว.กก. 1 ปส. 2 บช.ปส. ถูกยิงเข้าที่แก้มและไหปลาร้าด้านขวา
question :ผบ.หมู่งาน ป.ตชด. 24 อุดรธานี ถูกยิงด้วยอาวุธปืนอะไรเข้าที่แขนซ้าย 3 นัดหน้าท้อง
answer : อาวุธปืนอาก้า
status : 1
Among these 7,642 pairs, some of the extracted questions and answers are correct and some are not, for example the answer does not match the question, or the answer appears inside the question sentence itself.
The Machima team reviewed the question-answer pairs and labelled each pair as correct or incorrect, with 1 = correct and 0 = incorrect.
Of the 7,642 question-answer pairs, 4,438 were found to be correct and 3,204 incorrect.
You can load the data with the following code:
```python
!pip install datasets -qq  # for loading the dataset
from datasets import load_dataset
import pandas as pd
dataset = load_dataset("SuperAI2-Machima/ThaiQA_LST20")
train_df = pd.DataFrame(dataset['train'])
train_df
``` |
bhigy/buckeye_asr | 2022-10-24T15:32:04.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | bhigy | The Buckeye Corpus of conversational speech contains high-quality recordings
from 40 speakers in Columbus OH conversing freely with an interviewer. The
speech has been orthographically transcribed and phonetically labeled. | @misc{pitt2007Buckeye,
title = {Buckeye {Corpus} of {Conversational} {Speech} (2nd release).},
url = {www.buckeyecorpus.osu.edu},
publisher = {Columbus, OH: Department of Psychology, Ohio State University (Distributor)},
author = {Pitt, M.A. and Dilley, L. and Johnson, K. and Kiesling, S. and Raymond, W. and Hume, E. and Fosler-Lussier, E.},
year = {2007},
} | null | 0 | 6 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
language_bcp47:
- en-US
license:
- other
multilinguality:
- monolingual
pretty_name: Buckeye Corpus
size_categories:
- unknown
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids:
- speech-recognition
---
# Dataset Card for the Buckeye Corpus (buckeye_asr)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://buckeyecorpus.osu.edu/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The Buckeye Corpus of conversational speech contains high-quality recordings from 40 speakers in Columbus OH conversing freely with an interviewer. The speech has been orthographically transcribed and phonetically labeled.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
American English (en-US)
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
- `file`: filename of the audio file containing the utterance.
- `audio`: filename of the audio file containing the utterance.
- `text`: transcription of the utterance.
- `phonetic_detail`: list of phonetic annotations for the utterance (start, stop and label of each phone).
- `word_detail`: list of word annotations for the utterance (start, stop, label, broad and narrow transcriptions, syntactic class).
- `speaker_id`: string identifying the speaker.
- `id`: string identifying the utterance.
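Given the documented fields, per-phone durations fall out directly from `phonetic_detail`; the dict layout below is an assumption about how each phone's annotations (start, stop, label) are exposed:

```python
# Hypothetical utterance mirroring the documented fields: each phone has a
# start time, stop time and label (treating the times as seconds is an assumption).
utterance = {
    "id": "s0101a-0",
    "phonetic_detail": [
        {"start": 0.00, "stop": 0.12, "label": "dh"},
        {"start": 0.12, "stop": 0.31, "label": "ah"},
    ],
}

# Per-phone durations.
durations = [round(p["stop"] - p["start"], 2) for p in utterance["phonetic_detail"]]
```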
### Data Splits
The data is split in training, validation and test sets with different speakers (32, 4, and 4 speakers respectively) in each set. The sets are all balanced for speaker's gender and age.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
FREE for noncommercial uses.
### Citation Information
```
@misc{pitt2007Buckeye,
title = {Buckeye {Corpus} of {Conversational} {Speech} (2nd release).},
url = {www.buckeyecorpus.osu.edu},
publisher = {Columbus, OH: Department of Psychology, Ohio State University (Distributor)},
author = {Pitt, M.A. and Dilley, L. and Johnson, K. and Kiesling, S. and Raymond, W. and Hume, E. and Fosler-Lussier, E.},
year = {2007},
}
```
### Usage
The first step is to download a copy of the dataset from [the official website](https://buckeyecorpus.osu.edu). Once done, the dataset can be loaded directly through the `datasets` library by running:
```
from datasets import load_dataset
dataset = load_dataset("bhigy/buckeye_asr", data_dir=<path_to_the_dataset>)
```
where `<path_to_the_dataset>` points to the folder where the dataset is stored. An example of path to one of the audio files is then `<path_to_the_dataset>/s01/s0101a.wav`. |
cointegrated/ru-paraphrase-NMT-Leipzig | 2022-10-23T12:23:15.000Z | [
"task_categories:text-generation",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:extended|other",
"language:ru",
"license:cc-by-4.0",
"conditional-text-generation",
"paraphrase-generation",
... | cointegrated | null | null | null | 4 | 6 | ---
annotations_creators:
- no-annotation
language_creators:
- machine-generated
language:
- ru
license:
- cc-by-4.0
multilinguality:
- translation
size_categories:
- 100K<n<1M
source_datasets:
- extended|other
task_categories:
- text-generation
pretty_name: ru-paraphrase-NMT-Leipzig
tags:
- conditional-text-generation
- paraphrase-generation
- paraphrase
---
# Dataset Card for **cointegrated/ru-paraphrase-NMT-Leipzig**
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** https://habr.com/ru/post/564916/
- **Point of Contact:** [@cointegrated](https://huggingface.co/cointegrated)
### Dataset Summary
The dataset contains 1 million Russian sentences and their automatically generated paraphrases.
It was created by David Dale ([@cointegrated](https://huggingface.co/cointegrated)) by translating the `rus-ru_web-public_2019_1M` corpus from [the Leipzig collection](https://wortschatz.uni-leipzig.de/en/download) into English and back into Russian. A fraction of the resulting paraphrases are invalid, and should be filtered out.
The blogpost ["Перефразирование русских текстов: корпуса, модели, метрики"](https://habr.com/ru/post/564916/) provides a detailed description of the dataset and its properties.
The dataset can be loaded with the following code:
```Python
import datasets
data = datasets.load_dataset(
'cointegrated/ru-paraphrase-NMT-Leipzig',
data_files={"train": "train.csv","val": "val.csv","test": "test.csv"},
)
```
Its output should look like
```
DatasetDict({
train: Dataset({
features: ['idx', 'original', 'en', 'ru', 'chrf_sim', 'labse_sim'],
num_rows: 980000
})
val: Dataset({
features: ['idx', 'original', 'en', 'ru', 'chrf_sim', 'labse_sim'],
num_rows: 10000
})
test: Dataset({
features: ['idx', 'original', 'en', 'ru', 'chrf_sim', 'labse_sim'],
num_rows: 10000
})
})
```
### Supported Tasks and Leaderboards
The dataset can be used to train and validate models for paraphrase generation or (if negative sampling is used) for paraphrase detection.
### Languages
Russian (main), English (auxiliary).
## Dataset Structure
### Data Instances
Data instances look like
```
{
"labse_sim": 0.93502015,
"chrf_sim": 0.4946451012684782,
"idx": 646422,
"ru": "О перспективах развития новых медиа-технологий в РФ расскажут на медиафоруме Енисея.",
"original": "Перспективы развития новых медиатехнологий в Российской Федерации обсудят участники медиафорума «Енисей.",
"en": "Prospects for the development of new media technologies in the Russian Federation will be discussed at the Yenisey Media Forum."
}
```
Where `original` is the original sentence, and `ru` is its machine-generated paraphrase.
### Data Fields
- `idx`: id of the instance in the original corpus
- `original`: the original sentence
- `en`: automatic translation of `original` to English
- `ru`: automatic translation of `en` back to Russian, i.e. a paraphrase of `original`
- `chrf_sim`: [ChrF++](https://huggingface.co/metrics/chrf) similarity of `original` and `ru`
- `labse_sim`: cosine similarity of [LaBSE](https://huggingface.co/cointegrated/LaBSE-en-ru) embeddings of `original` and `ru`
- `forward_entailment`: predicted probability that `original` entails `ru`
- `backward_entailment`: predicted probability that `ru` entails `original`
- `p_good`: predicted probability that `ru` and `original` have equivalent meaning
### Data Splits
Train – 980K, validation – 10K, test – 10K. The splits were generated randomly.
## Dataset Creation
### Curation Rationale
There are other Russian paraphrase corpora, but they have major drawbacks:
- The best known [corpus from paraphraser.ru 2016 contest](http://paraphraser.ru/download/) is rather small and covers only the News domain.
- [Opusparcus](https://huggingface.co/datasets/GEM/opusparcus), [ParaPhraserPlus](http://paraphraser.ru/download/), and [corpora of Tamara Zhordanija](https://github.com/tamriq/paraphrase) are noisy, i.e. a large proportion of sentence pairs in them have substantial difference in meaning.
- The Russian part of [TaPaCo](https://huggingface.co/datasets/tapaco) has very high lexical overlap in the sentence pairs; in other words, their paraphrases are not diverse enough.
The current corpus was generated with a dual objective: the paraphrases should be semantically as close as possible to the original sentences while being lexically different from them. Back-translation with a restricted vocabulary seems to achieve this goal often enough.
### Source Data
#### Initial Data Collection and Normalization
The `rus-ru_web-public_2019_1M` corpus from [the Leipzig collection](https://wortschatz.uni-leipzig.de/en/download) as is.
The process of its creation is described [in this paper](http://www.lrec-conf.org/proceedings/lrec2012/pdf/327_Paper.pdf):
D. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages.
In: *Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012*.
#### Automatic paraphrasing
The paraphrasing was carried out by translating the original sentence to English and then back to Russian.
The models [facebook/wmt19-ru-en](https://huggingface.co/facebook/wmt19-ru-en) and [facebook/wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru) were used for translation.
To ensure that the back-translated texts are not identical to the original texts, the final decoder was prohibited from using token n-grams from the original texts.
The code below implements the paraphrasing function.
```python
import torch
from transformers import FSMTModel, FSMTTokenizer, FSMTForConditionalGeneration
tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru")
model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-en-ru")
inverse_tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-ru-en")
inverse_model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-ru-en")
model.cuda();
inverse_model.cuda();
def paraphrase(text, gram=4, num_beams=5, **kwargs):
""" Generate a paraphrase using back translation.
Parameter `gram` denotes size of token n-grams of the original sentence that cannot appear in the paraphrase.
"""
input_ids = inverse_tokenizer.encode(text, return_tensors="pt")
with torch.no_grad():
outputs = inverse_model.generate(input_ids.to(inverse_model.device), num_beams=num_beams, **kwargs)
other_lang = inverse_tokenizer.decode(outputs[0], skip_special_tokens=True)
# print(other_lang)
input_ids = input_ids[0, :-1].tolist()
bad_word_ids = [input_ids[i:(i+gram)] for i in range(len(input_ids)-gram)]
input_ids = tokenizer.encode(other_lang, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(input_ids.to(model.device), num_beams=num_beams, bad_words_ids=bad_word_ids, **kwargs)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
return decoded
```
The corpus was created by running the above `paraphrase` function on the original sentences with parameters `gram=3, num_beams=5, repetition_penalty=3.14, no_repeat_ngram_size=6`.
### Annotations
#### Annotation process
The dataset was annotated by several automatic metrics:
- [ChrF++](https://huggingface.co/metrics/chrf) between `original` and `ru` sentences;
- cosine similarity between [LaBSE](https://huggingface.co/cointegrated/LaBSE-en-ru) embeddings of these sentences;
- forward and backward entailment probabilities predicted by the [rubert-base-cased-nli-twoway](https://huggingface.co/cointegrated/rubert-base-cased-nli-twoway) model;
- `p_good`, a metric aggregating the four metrics above into a single number. It is obtained with a logistic regression trained on 100 sentence pairs randomly chosen from the train set and manually labelled.
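A logistic-regression-style aggregation of the four scores can be sketched in pure Python; the coefficients below are made up for illustration, since the fitted weights are not published:

```python
import math

def p_good(chrf_sim, labse_sim, forward_entailment, backward_entailment,
           weights=(2.0, 4.0, 1.5, 1.5), bias=-5.0):
    """Logistic-regression-style aggregation of the four annotation scores.

    The coefficients are illustrative only; the real model was fitted on
    100 manually labelled sentence pairs.
    """
    z = bias + sum(w * x for w, x in zip(
        weights, (chrf_sim, labse_sim, forward_entailment, backward_entailment)))
    return 1 / (1 + math.exp(-z))

# Scores roughly matching the data instance shown earlier.
score = p_good(0.49, 0.94, 0.9, 0.9)
```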
#### Who are the annotators?
Human annotation was involved only for a small subset used to train the model for `p_good`. It was conducted by the dataset author, @cointegrated.
### Personal and Sensitive Information
The dataset is not known to contain any personal or sensitive information.
The sources and processes of original data collection are described at https://wortschatz.uni-leipzig.de/en/download.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset may enable the creation of paraphrasing systems that can be used both for "good" purposes (such as assisting writers or augmenting text datasets) and for "bad" purposes (such as disguising plagiarism). The authors are not responsible for any uses of the dataset.
### Discussion of Biases
The dataset may inherit some of the biases of [the underlying Leipzig web corpus](https://wortschatz.uni-leipzig.de/en/download) or the neural machine translation models ([1](https://huggingface.co/facebook/wmt19-ru-en), [2](https://huggingface.co/facebook/wmt19-en-ru)) with which it was generated.
### Other Known Limitations
Most of the paraphrases in the dataset are valid (by a rough estimate, at least 80%). However, some sentence pairs contain faults:
- Named entities are often spelled in different ways (e.g. `"Джейкоб" -> "Яков"`) or even replaced with other entities (e.g. `"Оймякон" -> "Оймянск"` or `"Верхоянск" -> "Тольятти"`).
- Sometimes the meaning of words or phrases changes significantly, e.g. `"полустанок" -> "полумашина"`, or `"были по колено в грязи" -> "лежали на коленях в иле"`.
- Sometimes the syntax is changed in a meaning-altering way, e.g. `"Интеллектуальное преимущество Вавилова и его соратников над демагогами из рядов сторонников новой агробиологии разительно очевидно." -> "Интеллектуал Вавилов и его приспешники в новой аграрной биологии явно превзошли демогогов."`.
- Grammatical properties that are present in Russian morphology but absent in English, such as gender, are often lost, e.g. `"Я не хотела тебя пугать" -> "Я не хотел пугать вас"`.
The field `labse_sim` reflects semantic similarity between the sentences, and it can be used to filter out at least some poor paraphrases.
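Since `labse_sim` tracks semantic closeness and `chrf_sim` tracks lexical overlap, a simple filter can keep pairs that are semantically close yet lexically diverse; the thresholds below are illustrative, not recommendations from the authors:

```python
def filter_paraphrases(rows, min_labse=0.8, max_chrf=0.6):
    """Keep pairs that are semantically close (high LaBSE cosine similarity)
    but lexically different (low ChrF overlap). Thresholds are illustrative."""
    return [r for r in rows
            if r["labse_sim"] >= min_labse and r["chrf_sim"] <= max_chrf]

rows = [
    {"labse_sim": 0.935, "chrf_sim": 0.495},  # like the instance shown above: keep
    {"labse_sim": 0.60,  "chrf_sim": 0.40},   # semantically distant: drop
    {"labse_sim": 0.99,  "chrf_sim": 0.95},   # near-verbatim copy: drop
]
kept = filter_paraphrases(rows)
```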
## Additional Information
### Dataset Curators
The dataset was created by [David Dale](https://daviddale.ru/en), a.k.a. [@cointegrated](https://huggingface.co/cointegrated).
### Licensing Information
This corpus, as well as the original Leipzig corpora, are licensed under [CC BY](http://creativecommons.org/licenses/by/4.0/).
### Citation Information
[This blog post](https://habr.com/ru/post/564916/) can be cited:
```
@misc{dale_paraphrasing_2021,
author = "Dale, David",
title = "Перефразирование русских текстов: корпуса, модели, метрики",
editor = "habr.com",
url = "https://habr.com/ru/post/564916/",
month = {June},
year = {2021},
note = {[Online; posted 28-June-2021]},
}
```
### Contributions
Thanks to [@avidale](https://github.com/avidale) for adding this dataset. |