| id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
seungheondoh/LP-MusicCaps-MC | 2023-08-01T03:52:24.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"music",
"text-to-music",
"music-to-text",
"art",
"arxiv:2307.16372",
"region:us"
] | seungheondoh | null | null | null | 4 | 391 | ---
license: mit
language:
- en
tags:
- music
- text-to-music
- music-to-text
- art
pretty_name: LP-MusicCaps-MC
size_categories:
- 1K<n<10K
---
======================================
**!important**: Be careful when using `caption_attribute_prediction` (we do not recommend using it)!
======================================
# Dataset Card for LP-MusicCaps-MC
## Dataset Description
- **Repository:** [LP-MusicCaps repository](https://github.com/seungheondoh/lp-music-caps)
- **Paper:** [ArXiv](https://arxiv.org/abs/2307.16372)
## Dataset Summary
**LP-MusicCaps** is a Large Language Model based Pseudo Music Caption dataset for `text-to-music` and `music-to-text` tasks. We construct the music-to-caption pairs with tag-to-caption generation (using three existing multi-label tag datasets and four task instructions). The data sources are MusicCaps, Magnatagtune, and Million Song Dataset ECALS subset.
- [LP-MusicCaps MSD](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MSD): 0.5M Audio with 2.2M Caption
- [LP-MusicCaps MTT](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MTT): 22k Audio with 88k Caption
- **LP-MusicCaps MC (This Repo)**: 5521 Audio with 22084 Caption. We utilize 13,219 unique aspects used by 10 musicians in the [MusicCaps dataset](https://huggingface.co/datasets/google/MusicCaps) to perform tag-to-caption generation through LLM.
## Data Instances
Each instance in LP-MusicCaps MC (This Repo) represents multiple audio-text pairs with meta-attributes:
```
{
'fname': '[-0Gj8-vB1q4]-[30-40]',
'ytid': '-0Gj8-vB1q4',
'aspect_list': ['low quality',
'sustained strings melody',
'soft female vocal',
'mellow piano melody',
'sad',
'soulful',
'ballad'
],
'caption_ground_truth': 'The low quality recording features a ballad song that contains sustained strings, mellow piano melody and soft female vocal singing over it. It sounds sad and soulful, like something you would hear at Sunday services.',
'caption_writing': 'This heartfelt ballad showcases a soulful and sad low-quality sustained strings melody intertwined with a mellow piano melody, and a soft female vocal, resulting in an emotionally charged and sonically rich experience for listeners.',
'caption_summary': 'A melancholic and soulful ballad with low-quality sustained strings, a mellow piano melody, and soft female vocals.',
'caption_paraphrase': 'A melancholic ballad of soulful sadness featuring a low quality sustained strings melody complemented by a soft, mellow piano melody accompanied by a plaintive, soothing female vocal.',
'caption_attribute_prediction': 'This soulful ballad features a sustained strings melody that tugs at your heartstrings, accompanied by a mellow piano melody and gentle percussion. The soft, emotionally-charged female vocal delivers poetic and poignant lyrics that speak to the sadness and pain of lost love. The addition of a beautiful string arrangement adds to the melodic depth of the song, making it a truly moving listening experience. With its slow tempo, this track exudes a mellow and introspective vibe, perfect for those moments when you need a moment to sit and reflect on the past.',
'pseudo_attribute': ['emotional lyrics',
'slow tempo',
'gentle percussion',
'string arrangement'
],
'is_crawled': True,
'author_id': 4,
'start_s': 30,
'end_s': 40,
'audioset_positive_labels': '/m/0140xf,/m/02cjck,/m/04rlf',
'is_balanced_subset': False,
'is_audioset_eval': True
}
```
## Pseudo Caption Example:
Input Tags:
*"video game theme, no singer, instrumental, analog sounding, small keyboard, beatboxing, playful, cheerful, groovy"*
Output Pseudo Caption:
*"instrumental track has a joyful and playful vibe, perfect for a video game theme. With no singer, the analog-sounding music features a small keyboard and beatboxing, creating a groovy and cheerful atmosphere"*
[More Information for pseudo caption generation](https://github.com/seungheondoh/lp-music-caps/blob/main/lpmc/llm_captioning/generate.py)
## Data Fields
| Name | Type | Description |
|------------------------------|-----------------|---------------------------------------------------------------------|
| fname | string | File name of the data |
| ytid | string | YouTube ID of the data |
| aspect_list | list of strings | List of unique aspects used by musicians in the MusicCaps dataset |
| caption_ground_truth | string | Ground truth caption for the data |
| caption_writing | string | Pseudo Caption generated through a writing instruction |
| caption_summary | string | Pseudo Caption generated through a summary instruction |
| caption_paraphrase | string | Pseudo Caption generated through a paraphrase instruction |
| caption_attribute_prediction | string | Pseudo Caption generated through an attribute_prediction instruction |
| pseudo_attribute | list of strings | List of pseudo-attributes used in caption_attribute_prediction |
| is_crawled | boolean | Indicates whether the data is crawled or not |
| author_id | int64 | ID of the author |
| start_s | int64 | Start time in seconds |
| end_s | int64 | End time in seconds |
| audioset_positive_labels | string | Positive labels from the AudioSet dataset |
| is_balanced_subset | boolean | Indicates whether the data is part of a balanced subset |
| is_audioset_eval | boolean | Indicates whether the data is for AudioSet evaluation |
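The dataset can be loaded with the `datasets` library. The sketch below is a minimal example; the repository id comes from this card, while the `train` split name is an assumption.
```python
from datasets import load_dataset

# Split name assumed; adjust if the repository exposes a different split layout
ds = load_dataset("seungheondoh/LP-MusicCaps-MC", split="train")

example = ds[0]
print(example["fname"], example["ytid"], example["aspect_list"])

# Keep only the three recommended pseudo captions, skipping
# caption_attribute_prediction as advised at the top of this card
captions = [
    example["caption_writing"],
    example["caption_summary"],
    example["caption_paraphrase"],
]
print(captions)
```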
## Considerations for Using the Data
The LP-MusicCaps dataset is recommended for research purposes only. Due to a known mislabeling issue, we recommend not using `caption_attribute_prediction` and `pseudo_attribute` unless they are used specifically for large-scale pretraining. Additionally, the field `is_crawled` indicates the samples used in the reference paper mentioned below.
## Discussion of Biases
It will be described in a paper to be released soon.
## Other Known Limitations
It will be described in a paper to be released soon. |
onestop_english | 2023-01-25T14:42:09.000Z | [
"task_categories:text2text-generation",
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:text-simplification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | null | This dataset is a compilation of the OneStopEnglish corpus of texts written at three reading levels into one file.
Text documents are classified into three reading levels - ele, int, adv (Elementary, Intermediate and Advanced).
This dataset demonstrates its usefulness through two applications - automatic readability assessment and automatic text simplification.
The corpus consists of 189 texts, each in three versions/reading levels (567 in total). | @inproceedings{vajjala-lucic-2018-onestopenglish,
title = {OneStopEnglish corpus: A new corpus for automatic readability assessment and text simplification},
author = {Sowmya Vajjala and Ivana Lučić},
booktitle = {Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications},
year = {2018}
} | null | 15 | 389 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text2text-generation
- text-classification
task_ids:
- multi-class-classification
- text-simplification
paperswithcode_id: onestopenglish
pretty_name: OneStopEnglish corpus
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': ele
'1': int
'2': adv
splits:
- name: train
num_bytes: 2278043
num_examples: 567
download_size: 1228804
dataset_size: 2278043
---
# Dataset Card for OneStopEnglish corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/nishkalavallabhi/OneStopEnglishCorpus
- **Repository:** https://github.com/purvimisal/OneStopCorpus-Compiled/raw/main/Texts-SeparatedByReadingLevel.zip
- **Paper:** https://www.aclweb.org/anthology/W18-0535.pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
OneStopEnglish is a corpus of texts written at three reading levels, and demonstrates its usefulness through two applications - automatic readability assessment and automatic text simplification.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
An instance example:
```
{
"text": "When you see the word Amazon, what’s the first thing you think...",
"label": 0
}
```
Note that each instance contains the full text of the document.
### Data Fields
- `text`: Full document text.
- `label`: Reading level of the document - ele/int/adv (Elementary/Intermediate/Advanced).
### Data Splits
The OneStopEnglish dataset has a single _train_ split.
| Split | Number of instances |
|-------|--------------------:|
| train | 567 |
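As a minimal usage sketch (the Hub id and the single `train` split come from this card; everything else is illustrative), the integer labels can be mapped back to their reading-level names:
```python
from datasets import load_dataset

# Single train split with 567 documents, as in the table above
ds = load_dataset("onestop_english", split="train")

label_feature = ds.features["label"]  # ClassLabel with names ele/int/adv
example = ds[0]
print(label_feature.int2str(example["label"]), "->", example["text"][:80])
```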
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons Attribution-ShareAlike 4.0 International License
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@purvimisal](https://github.com/purvimisal) for adding this dataset. |
roszcz/ecg-segmentation-ltafdb | 2023-08-10T11:33:12.000Z | [
"region:us"
] | roszcz | null | null | null | 0 | 389 | ---
dataset_info:
features:
- name: record_id
dtype: string
- name: signal
dtype:
array2_d:
shape:
- 2
- 1000
dtype: float32
- name: mask
dtype:
array2_d:
shape:
- 1
- 1000
dtype: int8
splits:
- name: train
num_bytes: 6591714200
num_examples: 730278
- name: validation
num_bytes: 755744025
num_examples: 83724
- name: test
num_bytes: 807009592
num_examples: 89407
download_size: 2229542434
dataset_size: 8154467817
---
# Dataset Card for "ecg-segmentation-ltafdb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SetFit/mnli | 2022-02-28T13:53:53.000Z | [
"region:us"
] | SetFit | null | null | null | 2 | 387 | # Glue MNLI
This dataset is a port of the official [`mnli` dataset](https://huggingface.co/datasets/glue/viewer/mnli/train) on the Hub.
It contains the matched version.
Note that the premise and hypothesis columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
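A minimal loading sketch (the split name is an assumption based on the original GLUE MNLI layout; the column names come from the note above):
```python
from datasets import load_dataset

# premise/hypothesis are renamed to text1/text2 in this port
ds = load_dataset("SetFit/mnli", split="train")
example = ds[0]
print(example["text1"])
print(example["text2"])
print(example["label"])
```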
|
tomekkorbak/detoxify-pile-chunk3-0-50000 | 2022-10-06T02:57:39.000Z | [
"region:us"
] | tomekkorbak | null | null | null | 0 | 387 | Entry not found |
TigerResearch/pretrain_zh | 2023-06-14T13:50:32.000Z | [
"region:us"
] | TigerResearch | null | null | null | 80 | 387 | ---
dataset_info:
features:
- name: dataType
dtype: string
- name: title
dtype: string
- name: content
dtype: string
- name: uniqueKey
dtype: string
- name: titleUkey
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 58043923125
num_examples: 16905023
download_size: 25662051889
dataset_size: 58043923125
---
# Dataset Card for "pretrain_zh"
The Chinese portion of the [Tigerbot](https://github.com/TigerResearch/TigerBot) pretraining data.
It contains (before compression) 12 GB of Chinese books (zh-books), 25 GB of Chinese web text (zh-webtext), and 19 GB of Chinese wiki/encyclopedia text (zh-wiki).
For more corpora, follow the open-source models and ongoing updates at [https://github.com/TigerResearch/TigerBot](https://github.com/TigerResearch/TigerBot).
<p align="center" width="40%">
</p>
## Usage
```python
import datasets
# Downloads the full Chinese pretraining corpus (~16.9M examples, ~26 GB compressed)
ds_sft = datasets.load_dataset('TigerResearch/pretrain_zh')
``` |
heliosbrahma/mental_health_chatbot_dataset | 2023-08-03T04:12:40.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:mit",
"medical",
"region:us"
] | heliosbrahma | null | null | null | 17 | 387 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_examples: 172
license: mit
task_categories:
- text-generation
- conversational
language:
- en
tags:
- medical
pretty_name: Mental Health Chatbot Dataset
size_categories:
- n<1K
---
# Dataset Card for "heliosbrahma/mental_health_chatbot_dataset"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Dataset Description
### Dataset Summary
This dataset contains conversational pairs of questions and answers in a single text field related to mental health. The dataset was curated from popular healthcare blogs like WebMD, Mayo Clinic and HealthLine, online FAQs, etc. All questions and answers have been anonymized to remove any PII data and pre-processed to remove any unwanted characters.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A data instance includes a text column which is a conversational pair of questions and answers. Questions were asked by patients and answers were given by healthcare providers.
### Data Fields
- 'text': conversational pair of questions and answers between patient and healthcare provider.
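A minimal loading sketch (the repository id comes from this card and the `train` split with 172 examples comes from the metadata above):
```python
from datasets import load_dataset

# Each row holds one question/answer pair in a single "text" field
ds = load_dataset("heliosbrahma/mental_health_chatbot_dataset", split="train")
print(len(ds))       # 172
print(ds[0]["text"])
```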
## Dataset Creation
### Curation Rationale
Chatbots offer a readily available and accessible platform for individuals seeking support. They can be accessed anytime and anywhere, providing immediate assistance to those in need. Chatbots can offer empathetic and non-judgmental responses, providing emotional support to users. While they cannot replace human interaction entirely, they can be a helpful supplement, especially in moments of distress.
Hence, this dataset was curated to help finetune a conversational AI bot using this custom dataset which can then be deployed and be provided to the end patient as a chatbot.
### Source Data
This dataset was curated from popular healthcare blogs like WebMD, Mayo Clinic and HealthLine, online FAQs, etc.
### Personal and Sensitive Information
The dataset may contain sensitive information related to mental health. All questions and answers have been anonymized to remove any PII data. |
reciprocate/gsm8k-test_critiques | 2023-09-15T08:08:52.000Z | [
"region:us"
] | reciprocate | null | null | null | 1 | 387 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: critique
dtype: string
- name: revision
dtype: string
- name: revision_score
dtype: int64
- name: truth
dtype: float64
splits:
- name: train
num_bytes: 850387
num_examples: 753
download_size: 431338
dataset_size: 850387
---
# Dataset Card for "gsm8k-test_critiques"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
juletxara/pawsx_mt | 2023-07-21T10:18:49.000Z | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"task_ids:multi-input-text-classification",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-paws",
"language:en",
"license:other",
"paraphrase-identification",
"arxiv:1908.11828",
"region:us"
] | juletxara | PAWS-X, a multilingual version of PAWS (Paraphrase Adversaries from Word Scrambling) for six languages.
This dataset contains 23,659 human translated PAWS evaluation pairs and 296,406 machine
translated training pairs in six typologically distinct languages: French, Spanish, German,
Chinese, Japanese, and Korean. English language is available by default. All translated
pairs are sourced from examples in PAWS-Wiki.
For further details, see the accompanying paper: PAWS-X: A Cross-lingual Adversarial Dataset
for Paraphrase Identification (https://arxiv.org/abs/1908.11828)
NOTE: There might be some missing or wrong labels in the dataset and we have replaced them with -1. | @InProceedings{pawsx2019emnlp,
title = {{PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification}},
author = {Yang, Yinfei and Zhang, Yuan and Tar, Chris and Baldridge, Jason},
booktitle = {Proc. of EMNLP},
year = {2019}
} | null | 0 | 386 | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- expert-generated
- machine-generated
language:
- en
license:
- other
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-paws
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
- text-scoring
- multi-input-text-classification
paperswithcode_id: paws-x
pretty_name: 'PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification'
tags:
- paraphrase-identification
dataset_info:
- config_name: nllb-200-distilled-600M
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 470424
num_examples: 2000
- name: es
num_bytes: 477895
num_examples: 2000
- name: fr
num_bytes: 478044
num_examples: 2000
- name: ja
num_bytes: 461718
num_examples: 2000
- name: ko
num_bytes: 467649
num_examples: 2000
- name: zh
num_bytes: 481919
num_examples: 2000
download_size: 2704143
dataset_size: 2837649
- config_name: nllb-200-distilled-1.3B
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 469810
num_examples: 2000
- name: es
num_bytes: 477848
num_examples: 2000
- name: fr
num_bytes: 476036
num_examples: 2000
- name: ja
num_bytes: 465219
num_examples: 2000
- name: ko
num_bytes: 469779
num_examples: 2000
- name: zh
num_bytes: 481685
num_examples: 2000
download_size: 2706871
dataset_size: 2840377
- config_name: nllb-200-1.3B
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 472562
num_examples: 2000
- name: es
num_bytes: 480329
num_examples: 2000
- name: fr
num_bytes: 479096
num_examples: 2000
- name: ja
num_bytes: 465418
num_examples: 2000
- name: ko
num_bytes: 468672
num_examples: 2000
- name: zh
num_bytes: 480250
num_examples: 2000
download_size: 2712821
dataset_size: 2846327
- config_name: nllb-200-3.3B
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 475185
num_examples: 2000
- name: es
num_bytes: 482022
num_examples: 2000
- name: fr
num_bytes: 480477
num_examples: 2000
- name: ja
num_bytes: 468442
num_examples: 2000
- name: ko
num_bytes: 475577
num_examples: 2000
- name: zh
num_bytes: 483772
num_examples: 2000
download_size: 2731969
dataset_size: 2865475
- config_name: xglm-564M
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 405887
num_examples: 2000
- name: es
num_bytes: 433475
num_examples: 2000
- name: fr
num_bytes: 451810
num_examples: 2000
- name: ja
num_bytes: 480321
num_examples: 2000
- name: ko
num_bytes: 430501
num_examples: 2000
- name: zh
num_bytes: 536783
num_examples: 2000
download_size: 2605271
dataset_size: 2738777
- config_name: xglm-1.7B
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 448117
num_examples: 2000
- name: es
num_bytes: 470068
num_examples: 2000
- name: fr
num_bytes: 478245
num_examples: 2000
- name: ja
num_bytes: 462409
num_examples: 2000
- name: ko
num_bytes: 410803
num_examples: 2000
- name: zh
num_bytes: 455754
num_examples: 2000
download_size: 2591890
dataset_size: 2725396
- config_name: xglm-2.9B
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 450076
num_examples: 2000
- name: es
num_bytes: 471853
num_examples: 2000
- name: fr
num_bytes: 475575
num_examples: 2000
- name: ja
num_bytes: 435278
num_examples: 2000
- name: ko
num_bytes: 407905
num_examples: 2000
- name: zh
num_bytes: 437874
num_examples: 2000
download_size: 2545055
dataset_size: 2678561
- config_name: xglm-4.5B
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 466986
num_examples: 2000
- name: es
num_bytes: 483691
num_examples: 2000
- name: fr
num_bytes: 485910
num_examples: 2000
- name: ja
num_bytes: 485014
num_examples: 2000
- name: ko
num_bytes: 459562
num_examples: 2000
- name: zh
num_bytes: 502672
num_examples: 2000
download_size: 2750329
dataset_size: 2883835
- config_name: xglm-7.5B
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 457033
num_examples: 2000
- name: es
num_bytes: 471085
num_examples: 2000
- name: fr
num_bytes: 474534
num_examples: 2000
- name: ja
num_bytes: 455080
num_examples: 2000
- name: ko
num_bytes: 432714
num_examples: 2000
- name: zh
num_bytes: 462024
num_examples: 2000
download_size: 2618964
dataset_size: 2752470
- config_name: bloom-560m
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 422431
num_examples: 2000
- name: es
num_bytes: 407925
num_examples: 2000
- name: fr
num_bytes: 417238
num_examples: 2000
- name: ja
num_bytes: 541097
num_examples: 2000
- name: ko
num_bytes: 305526
num_examples: 2000
- name: zh
num_bytes: 467990
num_examples: 2000
download_size: 2428701
dataset_size: 2562207
- config_name: bloom-1b1
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 420950
num_examples: 2000
- name: es
num_bytes: 440695
num_examples: 2000
- name: fr
num_bytes: 444933
num_examples: 2000
- name: ja
num_bytes: 383160
num_examples: 2000
- name: ko
num_bytes: 309106
num_examples: 2000
- name: zh
num_bytes: 427093
num_examples: 2000
download_size: 2292431
dataset_size: 2425937
- config_name: bloom-1b7
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 441068
num_examples: 2000
- name: es
num_bytes: 455189
num_examples: 2000
- name: fr
num_bytes: 458970
num_examples: 2000
- name: ja
num_bytes: 471554
num_examples: 2000
- name: ko
num_bytes: 387729
num_examples: 2000
- name: zh
num_bytes: 434684
num_examples: 2000
download_size: 2515688
dataset_size: 2649194
- config_name: bloom-3b
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 452342
num_examples: 2000
- name: es
num_bytes: 468924
num_examples: 2000
- name: fr
num_bytes: 469477
num_examples: 2000
- name: ja
num_bytes: 450059
num_examples: 2000
- name: ko
num_bytes: 371349
num_examples: 2000
- name: zh
num_bytes: 443763
num_examples: 2000
download_size: 2522408
dataset_size: 2655914
- config_name: bloom-7b1
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 460868
num_examples: 2000
- name: es
num_bytes: 476090
num_examples: 2000
- name: fr
num_bytes: 477681
num_examples: 2000
- name: ja
num_bytes: 462541
num_examples: 2000
- name: ko
num_bytes: 410996
num_examples: 2000
- name: zh
num_bytes: 452755
num_examples: 2000
download_size: 2607425
dataset_size: 2740931
- config_name: llama-7B
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 467040
num_examples: 2000
- name: es
num_bytes: 479857
num_examples: 2000
- name: fr
num_bytes: 481692
num_examples: 2000
- name: ja
num_bytes: 469209
num_examples: 2000
- name: ko
num_bytes: 460027
num_examples: 2000
- name: zh
num_bytes: 492611
num_examples: 2000
download_size: 2716930
dataset_size: 2850436
- config_name: llama-13B
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 464622
num_examples: 2000
- name: es
num_bytes: 475395
num_examples: 2000
- name: fr
num_bytes: 475380
num_examples: 2000
- name: ja
num_bytes: 455735
num_examples: 2000
- name: ko
num_bytes: 446006
num_examples: 2000
- name: zh
num_bytes: 477833
num_examples: 2000
download_size: 2661465
dataset_size: 2794971
- config_name: llama-30B
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 471142
num_examples: 2000
- name: es
num_bytes: 480239
num_examples: 2000
- name: fr
num_bytes: 480078
num_examples: 2000
- name: ja
num_bytes: 473976
num_examples: 2000
- name: ko
num_bytes: 468087
num_examples: 2000
- name: zh
num_bytes: 498795
num_examples: 2000
download_size: 2738811
dataset_size: 2872317
- config_name: RedPajama-INCITE-Base-3B-v1
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 454468
num_examples: 2000
- name: es
num_bytes: 474260
num_examples: 2000
- name: fr
num_bytes: 477493
num_examples: 2000
- name: ja
num_bytes: 463806
num_examples: 2000
- name: ko
num_bytes: 455166
num_examples: 2000
- name: zh
num_bytes: 520240
num_examples: 2000
download_size: 2711927
dataset_size: 2845433
- config_name: RedPajama-INCITE-7B-Base
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 467209
num_examples: 2000
- name: es
num_bytes: 482675
num_examples: 2000
- name: fr
num_bytes: 479674
num_examples: 2000
- name: ja
num_bytes: 469695
num_examples: 2000
- name: ko
num_bytes: 427807
num_examples: 2000
- name: zh
num_bytes: 475045
num_examples: 2000
download_size: 2668599
dataset_size: 2802105
- config_name: open_llama_3b
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 459906
num_examples: 2000
- name: es
num_bytes: 474097
num_examples: 2000
- name: fr
num_bytes: 477589
num_examples: 2000
- name: ja
num_bytes: 462664
num_examples: 2000
- name: ko
num_bytes: 434739
num_examples: 2000
- name: zh
num_bytes: 490475
num_examples: 2000
download_size: 2665964
dataset_size: 2799470
- config_name: open_llama_7b
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 464258
num_examples: 2000
- name: es
num_bytes: 476895
num_examples: 2000
- name: fr
num_bytes: 475470
num_examples: 2000
- name: ja
num_bytes: 467530
num_examples: 2000
- name: ko
num_bytes: 420696
num_examples: 2000
- name: zh
num_bytes: 471007
num_examples: 2000
download_size: 2642350
dataset_size: 2775856
- config_name: open_llama_13b
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 466772
num_examples: 2000
- name: es
num_bytes: 480354
num_examples: 2000
- name: fr
num_bytes: 480221
num_examples: 2000
- name: ja
num_bytes: 460154
num_examples: 2000
- name: ko
num_bytes: 443434
num_examples: 2000
- name: zh
num_bytes: 467898
num_examples: 2000
download_size: 2665327
dataset_size: 2798833
- config_name: xgen-7b-4k-base
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 466109
num_examples: 2000
- name: es
num_bytes: 480599
num_examples: 2000
- name: fr
num_bytes: 481774
num_examples: 2000
- name: ja
num_bytes: 455601
num_examples: 2000
- name: ko
num_bytes: 441720
num_examples: 2000
- name: zh
num_bytes: 473661
num_examples: 2000
download_size: 2665958
dataset_size: 2799464
- config_name: xgen-7b-8k-base
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 464831
num_examples: 2000
- name: es
num_bytes: 478903
num_examples: 2000
- name: fr
num_bytes: 481199
num_examples: 2000
- name: ja
num_bytes: 458928
num_examples: 2000
- name: ko
num_bytes: 448148
num_examples: 2000
- name: zh
num_bytes: 475878
num_examples: 2000
download_size: 2674381
dataset_size: 2807887
- config_name: xgen-7b-8k-inst
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 472749
num_examples: 2000
- name: es
num_bytes: 483956
num_examples: 2000
- name: fr
num_bytes: 487250
num_examples: 2000
- name: ja
num_bytes: 485563
num_examples: 2000
- name: ko
num_bytes: 476502
num_examples: 2000
- name: zh
num_bytes: 507723
num_examples: 2000
download_size: 2780237
dataset_size: 2913743
- config_name: open_llama_7b_v2
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 464268
num_examples: 2000
- name: es
num_bytes: 476576
num_examples: 2000
- name: fr
num_bytes: 478153
num_examples: 2000
- name: ja
num_bytes: 460932
num_examples: 2000
- name: ko
num_bytes: 456955
num_examples: 2000
- name: zh
num_bytes: 467587
num_examples: 2000
download_size: 2670965
dataset_size: 2804471
- config_name: falcon-7b
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 456304
num_examples: 2000
- name: es
num_bytes: 474821
num_examples: 2000
- name: fr
num_bytes: 448537
num_examples: 2000
- name: ja
num_bytes: 373442
num_examples: 2000
- name: ko
num_bytes: 425657
num_examples: 2000
- name: zh
num_bytes: 449866
num_examples: 2000
download_size: 2495121
dataset_size: 2628627
- config_name: polylm-1.7b
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 459992
num_examples: 2000
- name: es
num_bytes: 466048
num_examples: 2000
- name: fr
num_bytes: 470826
num_examples: 2000
- name: ja
num_bytes: 448180
num_examples: 2000
- name: ko
num_bytes: 415816
num_examples: 2000
- name: zh
num_bytes: 438679
num_examples: 2000
download_size: 2566035
dataset_size: 2699541
- config_name: polylm-13b
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 473536
num_examples: 2000
- name: es
num_bytes: 482328
num_examples: 2000
- name: fr
num_bytes: 481341
num_examples: 2000
- name: ja
num_bytes: 452146
num_examples: 2000
- name: ko
num_bytes: 457546
num_examples: 2000
- name: zh
num_bytes: 464947
num_examples: 2000
download_size: 2678338
dataset_size: 2811844
- config_name: polylm-multialpaca-13b
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 472264
num_examples: 2000
- name: es
num_bytes: 477291
num_examples: 2000
- name: fr
num_bytes: 474987
num_examples: 2000
- name: ja
num_bytes: 465751
num_examples: 2000
- name: ko
num_bytes: 465889
num_examples: 2000
- name: zh
num_bytes: 461985
num_examples: 2000
download_size: 2684661
dataset_size: 2818167
- config_name: open_llama_3b_v2
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 454405
num_examples: 2000
- name: es
num_bytes: 475689
num_examples: 2000
- name: fr
num_bytes: 476410
num_examples: 2000
- name: ja
num_bytes: 447704
num_examples: 2000
- name: ko
num_bytes: 435675
num_examples: 2000
- name: zh
num_bytes: 466981
num_examples: 2000
download_size: 2623358
dataset_size: 2756864
- config_name: Llama-2-7b-hf
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 468952
num_examples: 2000
- name: es
num_bytes: 481463
num_examples: 2000
- name: fr
num_bytes: 481620
num_examples: 2000
- name: ja
num_bytes: 452968
num_examples: 2000
- name: ko
num_bytes: 448819
num_examples: 2000
- name: zh
num_bytes: 476890
num_examples: 2000
download_size: 2677206
dataset_size: 2810712
- config_name: Llama-2-13b-hf
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 471040
num_examples: 2000
- name: es
num_bytes: 480439
num_examples: 2000
- name: fr
num_bytes: 479753
num_examples: 2000
- name: ja
num_bytes: 457856
num_examples: 2000
- name: ko
num_bytes: 459972
num_examples: 2000
- name: zh
num_bytes: 478780
num_examples: 2000
download_size: 2694334
dataset_size: 2827840
- config_name: Llama-2-7b-chat-hf
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 429595
num_examples: 2000
- name: es
num_bytes: 395137
num_examples: 2000
- name: fr
num_bytes: 338615
num_examples: 2000
- name: ja
num_bytes: 448313
num_examples: 2000
- name: ko
num_bytes: 429424
num_examples: 2000
- name: zh
num_bytes: 425094
num_examples: 2000
download_size: 2332672
dataset_size: 2466178
- config_name: Llama-2-13b-chat-hf
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: de
num_bytes: 476183
num_examples: 2000
- name: es
num_bytes: 481248
num_examples: 2000
- name: fr
num_bytes: 480349
num_examples: 2000
- name: ja
num_bytes: 475454
num_examples: 2000
- name: ko
num_bytes: 482906
num_examples: 2000
- name: zh
num_bytes: 492532
num_examples: 2000
download_size: 2755166
dataset_size: 2888672
---
# Dataset Card for PAWS-X MT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx)
- **Repository:** [PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx)
- **Paper:** [PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification](https://arxiv.org/abs/1908.11828)
- **Point of Contact:** [Yinfei Yang](yinfeiy@google.com)
### Dataset Summary
This dataset contains 23,659 **human** translated PAWS evaluation pairs and
296,406 **machine** translated training pairs in six typologically distinct
languages: French, Spanish, German, Chinese, Japanese, and Korean. All
translated pairs are sourced from examples in
[PAWS-Wiki](https://github.com/google-research-datasets/paws#paws-wiki).
For further details, see the accompanying paper:
[PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase
Identification](https://arxiv.org/abs/1908.11828)
This is a machine-translated version of the original dataset, translated into English from each language.
### Supported Tasks and Leaderboards
It has mainly been used for paraphrase identification in English and six other languages, namely French, Spanish, German, Chinese, Japanese, and Korean.
### Languages
The dataset is in English, French, Spanish, German, Chinese, Japanese, and Korean.
## Dataset Structure
### Data Instances
For en:
```
id : 1
sentence1 : In Paris , in October 1560 , he secretly met the English ambassador , Nicolas Throckmorton , asking him for a passport to return to England through Scotland .
sentence2 : In October 1560 , he secretly met with the English ambassador , Nicolas Throckmorton , in Paris , and asked him for a passport to return to Scotland through England .
label : 0
```
For fr:
```
id : 1
sentence1 : À Paris, en octobre 1560, il rencontra secrètement l'ambassadeur d'Angleterre, Nicolas Throckmorton, lui demandant un passeport pour retourner en Angleterre en passant par l'Écosse.
sentence2 : En octobre 1560, il rencontra secrètement l'ambassadeur d'Angleterre, Nicolas Throckmorton, à Paris, et lui demanda un passeport pour retourner en Écosse par l'Angleterre.
label : 0
```
### Data Fields
All files are in tsv format with four columns:
Column Name | Data
:---------- | :--------------------------------------------------------
id | An ID that matches the ID of the source pair in PAWS-Wiki
sentence1 | The first sentence
sentence2 | The second sentence
label | Label for each pair
The source text of each translation can be retrieved by looking up the ID in the
corresponding file in PAWS-Wiki.
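On the Hub, this machine-translated version exposes one configuration per translation model and one split per source language (see the metadata above). A minimal loading sketch, with the configuration and split names taken from that metadata:
```python
from datasets import load_dataset

# PAWS-X test pairs originally in French, machine-translated into English by NLLB-200-3.3B
ds = load_dataset("juletxara/pawsx_mt", "nllb-200-3.3B", split="fr")

example = ds[0]
print(example["id"], example["label"])
print(example["sentence1"])
print(example["sentence2"])
```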
### Data Splits
The numbers of examples for each of the seven languages are shown below:
Language | Train | Dev | Test
:------- | ------: | -----: | -----:
en | 49,401 | 2,000 | 2,000
fr | 49,401 | 2,000 | 2,000
es | 49,401 | 2,000 | 2,000
de | 49,401 | 2,000 | 2,000
zh | 49,401 | 2,000 | 2,000
ja | 49,401 | 2,000 | 2,000
ko | 49,401 | 2,000 | 2,000
> **Caveat**: please note that the dev and test sets of PAWS-X are both sourced
> from the dev set of PAWS-Wiki. As a consequence, the same `sentence 1` may
> appear in both the dev and test sets. Nevertheless our data split guarantees
> that there is no overlap on sentence pairs (`sentence 1` + `sentence 2`)
> between dev and test.
## Dataset Creation
### Curation Rationale
Most existing work on adversarial data generation focuses on English. For example, PAWS (Paraphrase Adversaries from Word Scrambling) (Zhang et al., 2019) consists of challenging English paraphrase identification pairs from Wikipedia and Quora. The PAWS-X authors remedy this gap with PAWS-X, a new dataset of 23,659 human translated PAWS evaluation pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean. They provide baseline numbers for three models with different capacity to capture non-local context and sentence structure, and using different multilingual training and evaluation regimes. Multilingual BERT (Devlin et al., 2019) fine-tuned on PAWS English plus machine-translated data performs the best, with a range of 83.1-90.8 accuracy across the non-English languages and an average accuracy gain of 23% over the next best model. PAWS-X shows the effectiveness of deep, multilingual pre-training while also leaving considerable headroom as a new challenge to drive multilingual research that better captures structure and contextual information.
### Source Data
PAWS (Paraphrase Adversaries from Word Scrambling)
#### Initial Data Collection and Normalization
All translated pairs are sourced from examples in [PAWS-Wiki](https://github.com/google-research-datasets/paws#paws-wiki)
#### Who are the source language producers?
This dataset contains 23,659 human translated PAWS evaluation pairs and 296,406 machine translated training pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The paper credits the translation team, especially Mengmeng Niu, for their help with the annotations.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
### Citation Information
```
@InProceedings{pawsx2019emnlp,
title = {{PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification}},
author = {Yang, Yinfei and Zhang, Yuan and Tar, Chris and Baldridge, Jason},
booktitle = {Proc. of EMNLP},
year = {2019}
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik), [@gowtham1997](https://github.com/gowtham1997) for adding this dataset. |
dell-research-harvard/headlines-semantic-similarity | 2023-06-14T06:52:21.000Z | [
"task_categories:sentence-similarity",
"size_categories:100M<n<1B",
"language:en",
"license:cc-by-2.0",
"doi:10.57967/hf/0751",
"region:us"
] | dell-research-harvard | null | null | null | 6 | 386 | ---
license: cc-by-2.0
task_categories:
- sentence-similarity
language:
- en
pretty_name: HEADLINES
size_categories:
- 100M<n<1B
---
# Dataset Card for HEADLINES

## Dataset Description
- **Homepage:** [Dell Research homepage](https://dell-research-harvard.github.io/)
- **Repository:** [Github repository](https://github.com/dell-research-harvard)
- **Paper:** [arxiv submission](https://arxiv.org/abs/tbd)
- **Point of Contact:** [Melissa Dell](mailto:melissadell@fas.harvard.edu)
#### Dataset Summary
HEADLINES is a massive English-language semantic similarity dataset, containing 396,001,930 pairs of different headlines for the same newspaper article, taken from historical U.S. newspapers, covering the period 1920-1989.
#### Languages
The text in the dataset is in English.
## Dataset Structure
Each year in the dataset is divided into a distinct file (eg. 1952_headlines.json), giving a total of 70 files.
The data is presented in the form of clusters, rather than pairs, to eliminate duplication of text data and minimise the storage size of the datasets. An example of how to convert the clusters into pairs is given at the end of the Usage section below.
#### Dataset Instances
An example from the HEADLINES dataset looks like:
```python
{
"headline": "FRENCH AND BRITISH BATTLESHIPS IN MEXICAN WATERS",
"group_id": 4
"date": "May-14-1920",
"state": "kansas",
}
```
#### Dataset Fields
- `headline`: headline text.
- `date`: the date of publication of the newspaper article, as a string in the form mmm-DD-YYYY.
- `state`: state of the newspaper that published the headline.
- `group_id`: a number that is shared with all other headlines for the same article. This number is unique across all year files.
## Usage
The whole dataset can be easily downloaded using the `datasets` library.
```
from datasets import load_dataset
dataset_dict = load_dataset('dell-research-harvard/headlines-semantic-similarity')
```
If you just want to load specific files, you can specify these in the command.
```
from datasets import load_dataset
load_dataset(
'dell-research-harvard/headlines-semantic-similarity',
data_files=["1929_headlines.json", "1989_headlines.json"]
)
```
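Because all headlines for the same article share a `group_id`, the clusters can be expanded into positive pairs. The following is a minimal sketch of that conversion (illustrative code, not shipped with the dataset):
```python
from collections import defaultdict
from itertools import combinations

from datasets import load_dataset

ds = load_dataset(
    'dell-research-harvard/headlines-semantic-similarity',
    data_files=["1929_headlines.json"],
    split="train",
)

# Group headlines by the article they describe
clusters = defaultdict(list)
for example in ds:
    clusters[example["group_id"]].append(example["headline"])

# Every pair of headlines within a cluster is a positive semantic-similarity pair
pairs = [
    pair
    for headlines in clusters.values()
    for pair in combinations(headlines, 2)
]
print(len(pairs), pairs[0])
```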
## Dataset Creation
### Source Data
The dataset was constructed using a large corpus of newly digitized articles from off-copyright, local U.S. newspapers.
Many of these newspapers reprint articles from newswires, such as the Associated Press, but the headlines are written locally.
The dataset comprises different headlines for the same article.
#### Initial Data Collection and Normalization
To construct HEADLINES, we digitize front pages of off-copyright newspaper page scans, localizing and OCRing individual content regions like headlines and articles. The headlines, bylines, and article texts that form full articles span multiple bounding boxes - often arranged with complex layouts - and we associate them using a model that combines layout information and language understanding. Then, we use neural methods to accurately predict which articles come from the same underlying source, in the presence of noise and abridgement.
We remove all headline pairs whose Levenshtein edit distance, divided by the minimum length in the pair, is below 0.1, with the aim of removing pairs that are exact duplicates up to OCR noise.
#### Who are the source language producers?
The text data was originally produced by journalists of local U.S. newspapers.
### Annotations
The dataset does not contain any additional annotations.
### Personal and Sensitive Information
The dataset may contain information about individuals, to the extent that this is covered in the headlines of news stories. However we make no additional information about individuals publicly available.
### Data Description
The dataset contains 396,001,930 positive semantic similarity pairs, from 1920 to 1989.

It contains headlines from all 50 states.

## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to widen the range of language and topics for training semantic similarity models.
This will facilitate the study of semantic change across space and time.
Specific biases in the dataset are considered in the next section.
### Discussion of Biases
The headlines in the dataset may reflect attitudes and values from the period in which they were written, 1920-1989. This may include instances of racism, sexism and homophobia.
We also note that given that all the newspapers considered are from the U.S., the data is likely to present a Western perspective on the news stories of the day.
### Other Known Limitations
As the dataset is sourced from digitised text, it contains some OCR errors.
## Additional information
### Licensing Information
HEADLINES is released under the Creative Commons CC-BY 2.0 license.
### Dataset curators
This dataset was created by Emily Silcock and Melissa Dell. For more information, see [Dell Research Harvard](https://dell-research-harvard.github.io/).
### Citation information
Citation coming soon.
|
result-kand2-sdxl-wuerst-karlo/db197d09 | 2023-09-26T15:16:30.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 386 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 170
num_examples: 10
download_size: 1327
dataset_size: 170
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "db197d09"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
L4NLP/LEval | 2023-09-01T09:23:50.000Z | [
"task_categories:summarization",
"task_categories:question-answering",
"task_categories:multiple-choice",
"size_categories:1K<n<10K",
"language:en",
"license:gpl-3.0",
"region:us"
] | L4NLP | A benchmark to evaluate long document understanding and generation ability of LLM | } | null | 8 | 385 | ---
license: gpl-3.0
task_categories:
- summarization
- question-answering
- multiple-choice
language:
- en
size_categories:
- 1K<n<10K
viewer: true
---
### *L-Eval: Instituting Standardized Evaluation for Long Context Language Models*
L-Eval is a comprehensive evaluation suite for long-context language models, with 18 long-document tasks across multiple domains that require reasoning over long texts, including summarization, question answering, in-context learning with long CoT examples, topic retrieval, and paper writing assistance. L-Eval is a high-quality test set with 411 long documents and 2,043 query-response pairs. All samples in L-Eval have been manually annotated and checked by the authors. Many studies have explored expanding the context length of large models, but it remains to be seen whether these methods perform well enough on downstream tasks and whether they can surpass previous methods based on retrieval or chunking.
We hope L-Eval could help researchers and developers track the progress of long-context language models (LCLMs) and understand the strengths/shortcomings of different methods.
Dataset list:
```
["coursera", "gsm100", "quality", "topic_retrieval_longchat", "tpo", "financial_qa", "gov_report_summ", "legal_contract_qa", "meeting_summ", "multidoc_qa", "narrative_qa", "natural_question", "news_summ", "paper_assistant", "patent_summ", "review_summ", "scientific_qa", "tv_show_summ"]
```
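A minimal loading sketch, assuming each name in the list above is exposed as a configuration with a `test` split (this layout is an assumption, not something stated on this card):
```python
from datasets import load_dataset

# Load a single L-Eval task; the configuration/split layout is assumed
ds = load_dataset("L4NLP/LEval", "tpo", split="test")
print(ds[0])
```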
Detailed descriptions and how we collect the data can be found [https://github.com/OpenLMLab/LEval](https://github.com/OpenLMLab/LEval). |
Francesco/furniture-ngpea | 2023-03-30T09:12:40.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 0 | 384 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': furniture
'1': Chair
'2': Sofa
'3': Table
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: furniture-ngpea
tags:
- rf100
---
# Dataset Card for furniture-ngpea
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/furniture-ngpea
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
furniture-ngpea
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
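A minimal sketch of reading the annotations and converting the COCO-style `[x, y, width, height]` boxes to corner coordinates (the repository id comes from this card; the `train` split name is an assumption):
```python
from datasets import load_dataset

ds = load_dataset("Francesco/furniture-ngpea", split="train")
example = ds[0]

# Category names as listed in the metadata above
names = ["furniture", "Chair", "Sofa", "Table"]

for bbox, cat in zip(example["objects"]["bbox"], example["objects"]["category"]):
    x, y, w, h = bbox
    # COCO boxes are [x_min, y_min, width, height]; convert to [x_min, y_min, x_max, y_max]
    print(names[cat], [x, y, x + w, y + h])
```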
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/furniture-ngpea
### Citation Information
```
@misc{ furniture-ngpea,
title = { furniture ngpea Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/furniture-ngpea } },
url = { https://universe.roboflow.com/object-detection/furniture-ngpea },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
ai4privacy/pii-masking-65k | 2023-08-27T04:42:54.000Z | [
"size_categories:10K<n<100K",
"language:en",
"language:fr",
"language:de",
"language:it",
"legal",
"business",
"psychology",
"privacy",
"region:us"
] | ai4privacy | null | null | null | 12 | 384 | ---
language:
- en
- fr
- de
- it
tags:
- legal
- business
- psychology
- privacy
size_categories:
- 10K<n<100K
---
# Purpose and Features
The purpose of the model and dataset is to remove personally identifiable information (PII) from text, especially in the context of AI assistants and LLMs.
The model is a fine-tuned version of DistilBERT, a smaller and faster version of BERT. It was adapted for the task of token classification based on what is, to our knowledge, the largest open-source PII masking dataset, which we are releasing simultaneously. The model size is 62 million parameters. The original encoding of the parameters yields a model size of 268 MB, which is compressed to 43 MB after parameter quantization. The models are available in PyTorch, TensorFlow, and TensorFlow.js.
The dataset is composed of ~43,000 observations. Each row starts with a natural language sentence that includes placeholders for PII and could plausibly be written to an AI assistant. The placeholders are then filled in with mocked personal information and tokenized with the BERT tokenizer. We label the tokens that correspond to PII, serving as the ground truth to train our model.
The dataset covers a range of contexts in which PII can appear. The sentences span 58 sensitive data types (~117 token classes), targeting **125 discussion subjects / use cases** split across business, psychology and legal fields, and 5 interaction styles (e.g. casual conversation, formal document, emails, etc.).
Key facts:
- Currently 5.6m tokens with 65k PII examples.
- Multiple languages
- Human-in-the-loop validated high quality dataset
- Synthetic data generated using proprietary algorithms
- Adapted from DistilBertForTokenClassification
- Framework PyTorch
- 8 bit quantization
# Token distribution across PII classes
There are 2 dataset releases:
- Original release:
- [PII43k_original.jsonl](PII43k_original.jsonl)
- New release with balanced token distribution:
- [english_balanced_10k.jsonl](english_balanced_10k.jsonl)
- [french_balanced_5k.jsonl](french_balanced_5k.jsonl)
- [german_balanced_3k.jsonl](german_balanced_3k.jsonl)
- [italian_balanced_3k.jsonl](italian_balanced_3k.jsonl)
The new release **balances the distribution of tokens across the PII classes** covered by the dataset.
This graph shows the distribution of observations across the different PII classes in the new release:

This is an important improvement, because the old release focused on just a few classes of PII and didn't provide enough examples of the other ones.
This graph shows the unbalanced distribution of observations across the different PII classes in the old release:

Current counts of tokens per example:

# Performance evaluation
| Test Precision | Test Recall | Test Accuracy |
|:-:|:-:|:-:|
# Community Engagement:
Newsletter & updates: www.Ai4privacy.com
- Looking for ML engineers, developers, beta-testers, human in the loop validators (all languages)
- Integrations with already existing open source solutions
# Roadmap and Future Development
- Multilingual benchmarking
- Extended integrations
- Continuously increase the training set
- Further optimisation to the model to reduce size and increase generalisability
- The next major update is planned for the 14th of July (subscribe to the newsletter for updates)
# Use Cases and Applications
**Chatbots**: Incorporating a PII masking model into chatbot systems can ensure the privacy and security of user conversations by automatically redacting sensitive information such as names, addresses, phone numbers, and email addresses.
**Customer Support Systems**: When interacting with customers through support tickets or live chats, masking PII can help protect sensitive customer data, enabling support agents to handle inquiries without the risk of exposing personal information.
**Email Filtering**: Email providers can utilize a PII masking model to automatically detect and redact PII from incoming and outgoing emails, reducing the chances of accidental disclosure of sensitive information.
**Data Anonymization**: Organizations dealing with large datasets containing PII, such as medical or financial records, can leverage a PII masking model to anonymize the data before sharing it for research, analysis, or collaboration purposes.
**Social Media Platforms**: Integrating PII masking capabilities into social media platforms can help users protect their personal information from unauthorized access, ensuring a safer online environment.
**Content Moderation**: PII masking can assist content moderation systems in automatically detecting and blurring or redacting sensitive information in user-generated content, preventing the accidental sharing of personal details.
**Online Forms**: Web applications that collect user data through online forms, such as registration forms or surveys, can employ a PII masking model to anonymize or mask the collected information in real-time, enhancing privacy and data protection.
**Collaborative Document Editing**: Collaboration platforms and document editing tools can use a PII masking model to automatically mask or redact sensitive information when multiple users are working on shared documents.
**Research and Data Sharing**: Researchers and institutions can leverage a PII masking model to ensure privacy and confidentiality when sharing datasets for collaboration, analysis, or publication purposes, reducing the risk of data breaches or identity theft.
**Content Generation**: Content generation systems, such as article generators or language models, can benefit from PII masking to automatically mask or generate fictional PII when creating sample texts or examples, safeguarding the privacy of individuals.
(...and whatever else your creative mind can think of)
# Support and Maintenance
AI4Privacy is a project affiliated with [AISuisse SA](https://www.aisuisse.com/). |
pubmed | 2022-12-22T07:57:43.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"task_ids:text-scoring",
"task_ids:topic-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:other",
"citation-estimation",
"region:us"
] | null | NLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. Each day, NLM produces update files that include new, revised and deleted citations. See our documentation page for more information. | Courtesy of the U.S. National Library of Medicine. | null | 29 | 383 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- text-classification
task_ids:
- language-modeling
- masked-language-modeling
- text-scoring
- topic-classification
paperswithcode_id: pubmed
pretty_name: PubMed
tags:
- citation-estimation
dataset_info:
- config_name: '2023'
features:
- name: MedlineCitation
struct:
- name: PMID
dtype: int32
- name: DateCompleted
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: NumberOfReferences
dtype: int32
- name: DateRevised
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: Article
struct:
- name: Abstract
struct:
- name: AbstractText
dtype: string
- name: ArticleTitle
dtype: string
- name: AuthorList
struct:
- name: Author
sequence:
- name: LastName
dtype: string
- name: ForeName
dtype: string
- name: Initials
dtype: string
- name: CollectiveName
dtype: string
- name: Language
dtype: string
- name: GrantList
struct:
- name: Grant
sequence:
- name: GrantID
dtype: string
- name: Agency
dtype: string
- name: Country
dtype: string
- name: PublicationTypeList
struct:
- name: PublicationType
sequence: string
- name: MedlineJournalInfo
struct:
- name: Country
dtype: string
- name: ChemicalList
struct:
- name: Chemical
sequence:
- name: RegistryNumber
dtype: string
- name: NameOfSubstance
dtype: string
- name: CitationSubset
dtype: string
- name: MeshHeadingList
struct:
- name: MeshHeading
sequence:
- name: DescriptorName
dtype: string
- name: QualifierName
dtype: string
- name: PubmedData
struct:
- name: ArticleIdList
sequence:
- name: ArticleId
sequence: string
- name: PublicationStatus
dtype: string
- name: History
struct:
- name: PubMedPubDate
sequence:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: ReferenceList
sequence:
- name: Citation
dtype: string
- name: CitationId
dtype: int32
splits:
- name: train
num_bytes: 52199025303
num_examples: 34960700
download_size: 41168762331
dataset_size: 52199025303
---
# Dataset Card for PubMed
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nlm.nih.gov/databases/download/pubmed_medline.html
- **Documentation:** https://www.nlm.nih.gov/databases/download/pubmed_medline_documentation.html
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
NLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. Each day, NLM produces update files that include new, revised and deleted citations. See our documentation page for more information.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
- English
## Dataset Structure
Bear in mind the data comes from XML that has many tags which are hard to reflect
in a concise JSON format. Tags and lists do not map naturally between XML and JSON,
which led this library to make some choices regarding the data. "Journal" info was dropped
altogether as it would have led to many fields being empty all the time.
The hierarchy is also a bit unnatural, but the choice was made to stay as close as
possible to the original data for future releases that may change the schema on NLM's side.
"Author" has been kept and contains either "ForeName", "LastName", "Initials", or "CollectiveName".
(All the fields will be present all the time, but only some will be filled.)
### Data Instances
```json
{
"MedlineCitation": {
"PMID": 0,
"DateCompleted": {"Year": 0, "Month": 0, "Day": 0},
"NumberOfReferences": 0,
"DateRevised": {"Year": 0, "Month": 0, "Day": 0},
"Article": {
"Abstract": {"AbstractText": "Some abstract (can be missing)" },
"ArticleTitle": "Article title",
"AuthorList": {"Author": [
{"FirstName": "John", "ForeName": "Doe", "Initials": "JD", "CollectiveName": ""}
{"CollectiveName": "The Manhattan Project", "FirstName": "", "ForeName": "", "Initials": ""}
]},
"Language": "en",
"GrantList": {
"Grant": [],
},
"PublicationTypeList": {"PublicationType": []},
},
"MedlineJournalInfo": {"Country": "France"},
"ChemicalList": {"Chemical": [{
"RegistryNumber": "XX",
"NameOfSubstance": "Methanol"
}]},
"CitationSubset": "AIM",
"MeshHeadingList": {
"MeshHeading": [],
},
},
"PubmedData": {
"ArticleIdList": {"ArticleId": "10.1002/bjs.1800650203"},
"PublicationStatus": "ppublish",
"History": {"PubMedPubDate": [{"Year": 0, "Month": 0, "Day": 0}]},
"ReferenceList": [{"Citation": "Somejournal", "CitationId": 01}],
},
}
```
### Data Fields
The main fields that will probably interest people are listed below; a short access sketch follows the list:
- "MedlineCitation" > "Article" > "AuthorList" > "Author"
- "MedlineCitation" > "Article" > "Abstract" > "AbstractText"
- "MedlineCitation" > "Article" > "Article Title"
- "MedlineCitation" > "ChemicalList" > "Chemical"
- "MedlineCitation" > "NumberOfReferences"
### Data Splits
There are no splits in this dataset. It is given as is.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
https://www.nlm.nih.gov/databases/download/pubmed_medline_faq.html
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
https://www.nlm.nih.gov/databases/download/terms_and_conditions.html
### Citation Information
[Courtesy of the U.S. National Library of Medicine](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html).
### Contributions
Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset.
|
yxchar/imdb-tlm | 2021-11-04T18:01:06.000Z | [
"region:us"
] | yxchar | null | null | null | 0 | 383 | Entry not found |
Bingsu/Cat_and_Dog | 2023-01-26T10:48:25.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"region:us"
] | Bingsu | null | null | null | 2 | 383 | ---
language:
- en
license:
- cc0-1.0
pretty_name: Cat and Dog
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- image-classification
dataset_info:
features:
- name: image
dtype: image
- name: labels
dtype:
class_label:
names:
'0': cat
'1': dog
splits:
- name: train
num_bytes: 166451650.0
num_examples: 8000
- name: test
num_bytes: 42101650.0
num_examples: 2000
download_size: 227859268
dataset_size: 208553300.0
size_in_bytes: 436412568.0
---
## Dataset Description
- **Homepage:** [Cat and Dog](https://www.kaggle.com/datasets/tongpython/cat-and-dog)
- **Download Size** 217.30 MiB
- **Generated Size** 198.89 MiB
- **Total Size** 416.20 MiB
### Dataset Summary
A dataset from [kaggle](https://www.kaggle.com/datasets/tongpython/cat-and-dog) with duplicate data removed.
### Data Fields
The data instances have the following fields:
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `labels`: an `int` classification label.
### Class Label Mappings:
```
{
"cat": 0,
"dog": 1,
}
```
### Data Splits
| | train | test |
|---------------|-------|-----:|
| # of examples | 8000 | 2000 |
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/Cat_and_Dog")
>>> dataset
DatasetDict({
train: Dataset({
features: ['image', 'labels'],
num_rows: 8000
})
test: Dataset({
features: ['image', 'labels'],
num_rows: 2000
})
})
>>> dataset["train"].features
{'image': Image(decode=True, id=None), 'labels': ClassLabel(num_classes=2, names=['cat', 'dog'], id=None)}
``` |
gamino/wiki_medical_terms | 2022-12-20T16:23:58.000Z | [
"task_categories:text-classification",
"annotations_creators:other",
"language_creators:other",
"size_categories:1K<n<10K",
"language:en",
"license:gpl-3.0",
"medical",
"conditions",
"region:us"
] | gamino | null | null | null | 19 | 383 | ---
annotations_creators:
- other
language:
- en
language_creators:
- other
license:
- gpl-3.0
multilinguality: []
pretty_name: Medical terms and their wikipedia text
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- medical
- conditions
task_categories:
- text-classification
task_ids: []
---
# Dataset Card for Medical terms and their wikipedia text
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
### Dataset Summary
This dataset contains over 6,000 medical terms and their Wikipedia text. It is intended to be used on downstream tasks that require medical terms and their Wikipedia explanations.
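Since the instances and fields are not documented yet, a minimal sketch is simply to load the dataset and inspect its schema (the `train` split name is an assumption):
```python
from datasets import load_dataset

# Assumes the default configuration exposes a "train" split.
ds = load_dataset("gamino/wiki_medical_terms", split="train")

print(ds)               # number of rows and column names
print(ds.column_names)  # inspect the schema before relying on specific fields
print(ds[0])            # first term and its Wikipedia text
```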
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
### Citation Information
[More Information Needed]
|
result-kand2-sdxl-wuerst-karlo/7aa2df49 | 2023-09-26T16:34:15.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 383 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 172
num_examples: 10
download_size: 1339
dataset_size: 172
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "7aa2df49"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/1b874213 | 2023-09-26T16:50:27.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 383 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 161
num_examples: 10
download_size: 1306
dataset_size: 161
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "1b874213"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/b6112e1b | 2023-09-26T16:29:48.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 382 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 166
num_examples: 10
download_size: 1318
dataset_size: 166
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "b6112e1b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
colbertv2/lotte | 2022-08-04T17:55:59.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:2112.01488",
"region:us"
] | colbertv2 | LoTTE Passages Dataset for ColBERTv2 | @inproceedings{santhanam-etal-2022-colbertv2,
title = "{C}ol{BERT}v2: Effective and Efficient Retrieval via Lightweight Late Interaction",
author = "Santhanam, Keshav and
Khattab, Omar and
Saad-Falcon, Jon and
Potts, Christopher and
Zaharia, Matei",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.272",
pages = "3715--3734",
abstract = "Neural information retrieval (IR) has greatly advanced search and other knowledge-intensive language tasks. While many neural IR methods encode queries and documents into single-vector representations, late interaction models produce multi-vector representations at the granularity of each token and decompose relevance modeling into scalable token-level computations. This decomposition has been shown to make late interaction more effective, but it inflates the space footprint of these models by an order of magnitude. In this work, we introduce Maize, a retriever that couples an aggressive residual compression mechanism with a denoised supervision strategy to simultaneously improve the quality and space footprint of late interaction. We evaluate Maize across a wide range of benchmarks, establishing state-of-the-art quality within and outside the training domain while reducing the space footprint of late interaction models by 6{--}10x.",
} | null | 0 | 381 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: 'Lotte queries from ColBERTv2: Effective and Efficient Retrieval via
Lightweight Late Interaction'
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- question-answering
task_ids:
- extractive-qa
---
Queries for Lotte dataset from [ColBERTv2: Effective and Efficient Retrieval via
Lightweight Late Interaction](https://arxiv.org/abs/2112.01488) |
cyrilzhang/financial_phrasebank_split | 2023-01-17T21:26:08.000Z | [
"region:us"
] | cyrilzhang | null | null | null | 1 | 381 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
          '0': negative
          '1': neutral
          '2': positive
splits:
- name: train
num_bytes: 611259.9339661576
num_examples: 4361
- name: test
num_bytes: 67980.06603384235
num_examples: 485
download_size: 418548
dataset_size: 679240.0
---
# Dataset Card for "financial_phrasebank_split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Bingsu/ko_alpaca_data | 2023-03-30T23:21:40.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ko",
"license:cc-by-nc-4.0",
"region:us"
] | Bingsu | null | null | null | 11 | 381 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 13791136
num_examples: 49620
download_size: 8491044
dataset_size: 13791136
license: cc-by-nc-4.0
language:
- ko
pretty_name: ko-alpaca-data
size_categories:
- 10K<n<100K
task_categories:
- text-generation
---
# Dataset Card for "ko_alpaca_data"
## Dataset Description
- **Repository:** [Beomi/KoAlpaca](https://github.com/Beomi/KoAlpaca)
- **Huggingface:** [beomi/KoAlpaca](https://huggingface.co/beomi/KoAlpaca)
- **Size of downloaded dataset files:** 8.10 MB
- **Size of the generated dataset:** 13.15 MB
### Dataset Summary
Korean translation of [alpaca data](https://huggingface.co/datasets/tatsu-lab/alpaca).
repository: [Beomi/KoAlpaca](https://github.com/Beomi/KoAlpaca)<br>
huggingface: [beomi/KoAlpaca](https://huggingface.co/beomi/KoAlpaca)
1. Translate dataset
We translated 'instruction' and 'input' in the dataset via the DeepL API. We did not translate 'output', because it is the output of OpenAI's `text-davinci-003` model.
2. Generate output data
Then, using the instruction and input, we generated output data via the OpenAI ChatGPT API (gpt-3.5-turbo).
Below is the prompt we used to generate the answer.
```python
PROMPT = """\
다양한 작업에 대한 답변을 생성해주세요. 이러한 작업 지침은 ChatGPT 모델에 주어지며, ChatGPT 모델이 지침을 완료하는지 평가합니다.
요구 사항은 다음과 같습니다:
1. 다양성을 극대화하기 위해 각 지시에 대해 동사를 반복하지 않도록 하세요.
2. 지시에 사용되는 언어도 다양해야 합니다. 예를 들어, 질문과 명령형 지시를 결합해야 합니다.
3. 지시 사항의 유형이 다양해야 합니다. 목록에는 개방형 생성, 분류, 편집 등과 같은 다양한 유형의 작업이 포함되어야 합니다.
2. GPT 언어 모델은 지시를 완료할 수 있어야 합니다. 예를 들어 어시스턴트에게 시각적 또는 오디오 출력을 생성하도록 요청하지 마세요. 또 다른 예로, 어시스턴트가 어떤 작업도 수행할 수 없으므로 오후 5시에 깨우거나 미리 알림을 설정하도록 요청하지 마세요.
3. 답변은 한국어로 작성해야 합니다.
4. 답변을 1~2문장으로 작성하세요. 명령문이나 질문도 허용됩니다.
5. 지시 사항에 대한 적절한 입력을 생성해야 합니다. 입력 필드에는 지시에 대한 구체적인 예가 포함되어야 합니다. 실제 데이터를 포함해야 하며 단순한 자리 표시자를 포함해서는 안 됩니다. 입력은 지시 사항을 어렵게 만들 수 있는 상당한 내용을 제공해야 하지만 100단어를 넘지 않는 것이 이상적입니다.
6. 일부 지시사항은 추가 입력이 있고, 일부 지시에는 입력 필드가 비어있습니다. 예를 들어 "세계에서 가장 높은 봉우리는 무엇인가?"라는 일반적인 정보를 묻는 지시의 경우 구체적인 맥락을 제공할 필요가 없어, 입력 필드가 비어있을 수 있습니다.
7. 출력은 명령어와 입력에 대한 적절한 응답이어야 합니다.
아래에 10개의 명령어와 입력(옵션)에 따라 적절한 응답을 생성하세요.
응답은 아래와 같은 형식으로 10가지를 0번 부터 9번 까지, 번호에 따라 해당 번호의 명령어와 입력에 알맞게 작성하세요.
각 응답 사이는 ### 으로 내용을 분리해주세요.
응답0: 첫 번째 응답내용###
응답1: 두 번째 응답내용###
...
응답9: 마지막 응답내용"""
```
### License
CC-BY-NC-4.0
### Data Splits
| | train |
| --------- | -------- |
| # of data | 49620 |
\# Note that the number is not the same as in the original data (52,002)
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("Bingsu/ko_alpaca_data", split="train")
>>> ds
Dataset({
features: ['instruction', 'input', 'output'],
num_rows: 49620
})
```
```python
>>> ds[0]
{'instruction': '건강을 유지하기 위한 세 가지 팁을 알려주세요.',
'input': '',
'output': '세 가지 팁은 아침식사를 꼭 챙기며, 충분한 수면을 취하고, 적극적으로 운동을 하는 것입니다.'}
``` |
rafaelpadilla/coco2017 | 2023-08-11T23:02:22.000Z | [
"task_categories:object-detection",
"annotations_creators:expert-generated",
"size_categories:100K<n<1M",
"language:en",
"arxiv:1405.0312",
"region:us"
] | rafaelpadilla | This dataset contains all COCO 2017 images and annotations split in training (118287 images) and validation (5000 images). | @article{DBLP:journals/corr/LinMBHPRDZ14,
author = {Tsung{-}Yi Lin and
Michael Maire and
Serge J. Belongie and
Lubomir D. Bourdev and
Ross B. Girshick and
James Hays and
Pietro Perona and
Deva Ramanan and
Piotr Doll{\'{a}}r and
C. Lawrence Zitnick},
title = {Microsoft {COCO:} Common Objects in Context},
journal = {CoRR},
volume = {abs/1405.0312},
year = {2014},
url = {http://arxiv.org/abs/1405.0312},
archivePrefix = {arXiv},
eprint = {1405.0312},
timestamp = {Mon, 13 Aug 2018 16:48:13 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/LinMBHPRDZ14},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 1 | 381 | ---
pretty_name: COCO2017
annotations_creators:
- expert-generated
size_categories:
- 100K<n<1M
language:
- en
task_categories:
- object-detection
---
# Dataset Card for COCO2017
This dataset includes **COCO 2017** only.
COCO 2014 and 2015 will be included soon.
## Dataset Description
- **Homepage:** https://cocodataset.org/
- **Repository:** https://github.com/cocodataset/cocoapi
- **Paper:** [Microsoft COCO: Common Objects in Context](https://arxiv.org/abs/1405.0312)
### Dataset Summary
COCO (Common Objects in Context) is a large-scale object detection, segmentation, and captioning dataset. It contains over 200,000 labeled images with over 80 category labels. It includes complex, everyday scenes with common objects in their natural context.
This dataset covers only the "object detection" part of the COCO dataset. Some features and specifications of the full COCO dataset:
- Object segmentation
- Recognition in context
- Superpixel stuff segmentation
- 330K images (>200K labeled)
- 1.5 million object instances
- 80 object categories
- 91 stuff categories
- 5 captions per image
- 250,000 people with keypoints
### Data Splits
- **Training set ("train")**: 118287 images annotated with 860001 bounding boxes in total.
- **Validation set ("val")**: 5000 images annotated with 36781 bounding boxes in total.
- **92 classes**: "None", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "street sign", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "hat", "backpack", "umbrella", "shoe", "eye glasses", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "plate", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "mirror", "dining table", "window", "desk", "toilet", "door", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "blender", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush", "hair brush"
- **But only 80 classes have with annotations**: "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"
### Boxes format:
For the object detection set of COCO dataset, the ground-truth bounding boxes are provided in the following format: `x, y, width, height` in absolute coordinates.
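As an illustration, converting a box from this `[x, y, width, height]` format to corner coordinates is a small helper (a sketch, not part of the dataset tooling):
```python
def coco_to_corners(bbox):
    """Convert a COCO-style [x, y, width, height] box to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

print(coco_to_corners([10.0, 20.0, 30.0, 40.0]))  # [10.0, 20.0, 40.0, 60.0]
```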
### Curation Rationale
The COCO dataset was curated with the goal of advancing the state of the art in many tasks, such as object detection, dense pose, keypoints, segmentation and image classification.
### Licensing Information
The annotations in this dataset belong to the COCO Consortium and are licensed under a Creative Commons Attribution 4.0 License.
More details at: https://cocodataset.org/#termsofuse
### Loading dataset
You can load COCO 2017 dataset by calling:
```
from datasets import load_dataset
# Full dataset
dataset = load_dataset("rafaelpadilla/coco2017")
print(dataset)
>> DatasetDict({
>> train: Dataset({
>> features: ['image', 'image_id', 'objects'],
>> num_rows: 118287
>> })
>> val: Dataset({
>> features: ['image', 'image_id', 'objects'],
>> num_rows: 5000
>> })
>> })
# Training set only
dataset = load_dataset("rafaelpadilla/coco2017", split="train")
# Validation set only
dataset = load_dataset("rafaelpadilla/coco2017", split="val")
```
### COCODataset Class
We offer the dataset class `COCODataset`, which extends `VisionDataset` to represent COCO images and annotations. To use it, you need to install the coco2017 package. For that, follow the steps below:
1. Create and activate an environment:
```
conda create -n coco2017 python=3.11
conda activate coco2017
```
2. Install cocodataset package:
```
pip install git+https://huggingface.co/datasets/rafaelpadilla/coco2017@main
```
or alternatively:
```
git clone https://huggingface.co/datasets/rafaelpadilla/coco2017
cd coco2017
pip install .
```
3. Now you can import `COCODataset` class into your Python code by:
```
from cocodataset import COCODataset
```
### Citation Information
```
@inproceedings{lin2014microsoft,
  title={Microsoft coco: Common objects in context},
  author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
  booktitle={Computer Vision--ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13},
  pages={740--755},
  year={2014},
  organization={Springer}
}
```
### Contributions
- Tsung-Yi Lin (Google Brain)
- Genevieve Patterson (MSR, Trash TV)
- Matteo R. Ronchi (Caltech)
- Yin Cui (Google)
- Michael Maire (TTI-Chicago)
- Serge Belongie (Cornell Tech)
- Lubomir Bourdev (WaveOne, Inc.)
- Ross Girshick (FAIR)
- James Hays (Georgia Tech)
- Pietro Perona (Caltech)
- Deva Ramanan (CMU)
- Larry Zitnick (FAIR)
- Piotr Dollár (FAIR)
|
SetFit/bbc-news | 2022-01-18T05:58:34.000Z | [
"region:us"
] | SetFit | null | null | null | 4 | 380 | # BBC News Topic Classification
Dataset on [BBC News Topic Classification](https://www.kaggle.com/yufengdev/bbc-text-categorization/data): 2225 articles, each labeled under one of 5 categories: business, entertainment, politics, sport or tech. |
MLCommons/ml_spoken_words | 2022-12-06T11:11:02.000Z | [
"task_categories:audio-classification",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:extended|common_voice",
"language:ar",
"language:as",
"language:br",
"language:ca",
"language:cnh",
"language:cs",
"language:cv",
"language:cy",
"language:de",
"language:dv",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fr",
"language:fy",
"language:ga",
"language:gn",
"language:ha",
"language:ia",
"language:id",
"language:it",
"language:ka",
"language:ky",
"language:lt",
"language:lv",
"language:mn",
"language:mt",
"language:nl",
"language:or",
"language:pl",
"language:pt",
"language:rm",
"language:ro",
"language:ru",
"language:rw",
"language:sah",
"language:sk",
"language:sl",
"language:sv",
"language:ta",
"language:tr",
"language:tt",
"language:uk",
"language:vi",
"language:zh",
"license:cc-by-4.0",
"other-keyword-spotting",
"region:us"
] | MLCommons | Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken
words in 50 languages collectively spoken by over 5 billion people, for academic
research and commercial applications in keyword spotting and spoken term search,
licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords,
totaling 23.4 million 1-second spoken examples (over 6,000 hours). The dataset
has many use cases, ranging from voice-enabled consumer devices to call center
automation. This dataset is generated by applying forced alignment on crowd-sourced sentence-level
audio to produce per-word timing estimates for extraction.
All alignments are included in the dataset. | @inproceedings{mazumder2021multilingual,
title={Multilingual Spoken Words Corpus},
author={Mazumder, Mark and Chitlangia, Sharad and Banbury, Colby and Kang, Yiping and Ciro, Juan Manuel and Achorn, Keith and Galvez, Daniel and Sabini, Mark and Mattson, Peter and Kanter, David and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021}
} | null | 16 | 380 | ---
annotations_creators:
- machine-generated
language_creators:
- other
language:
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fr
- fy
- ga
- gn
- ha
- ia
- id
- it
- ka
- ky
- lt
- lv
- mn
- mt
- nl
- or
- pl
- pt
- rm
- ro
- ru
- rw
- sah
- sk
- sl
- sv
- ta
- tr
- tt
- uk
- vi
- zh
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- extended|common_voice
task_categories:
- audio-classification
task_ids: []
pretty_name: Multilingual Spoken Words
language_bcp47:
- fy-NL
- ga-IE
- rm-sursilv
- rm-vallader
- sv-SE
- zh-CN
tags:
- other-keyword-spotting
---
# Dataset Card for Multilingual Spoken Words
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://mlcommons.org/en/multilingual-spoken-words/
- **Repository:** https://github.com/harvard-edge/multilingual_kws
- **Paper:** https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/fe131d7f5a6b38b23cc967316c13dae2-Paper-round2.pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken
words in 50 languages collectively spoken by over 5 billion people, for academic
research and commercial applications in keyword spotting and spoken term search,
licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords,
totaling 23.4 million 1-second spoken examples (over 6,000 hours). The dataset
has many use cases, ranging from voice-enabled consumer devices to call center
automation. This dataset is generated by applying forced alignment on crowd-sourced sentence-level
audio to produce per-word timing estimates for extraction.
All alignments are included in the dataset.
Data is provided in two formats: `wav` (16KHz) and `opus` (48KHz). Default configurations look like
`"{lang}_{format}"`, so to load, for example, Tatar in wav format do:
```python
ds = load_dataset("MLCommons/ml_spoken_words", "tt_wav")
```
To download multiple languages in a single dataset pass list of languages to `languages` argument:
```python
ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"])
```
To download a specific format pass it to the `format` argument (default format is `wav`):
```python
ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"], format="opus")
```
Note that each time you provide different sets of languages,
examples are generated from scratch even if you already provided one or several of them before
because custom configurations are created each time (the data is **not** redownloaded though).
### Supported Tasks and Leaderboards
Keyword spotting, Spoken term search
### Languages
The dataset is multilingual. To specify several languages to download pass a list of them to the
`languages` argument:
```python
ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"])
```
The dataset contains data for the following languages:
Low-resourced (<10 hours):
* Arabic (0.1G, 7.6h)
* Assamese (0.9M, 0.1h)
* Breton (69M, 5.6h)
* Chuvash (28M, 2.1h)
* Chinese (zh-CN) (42M, 3.1h)
* Dhivehi (0.7M, 0.04h)
* Frisian (0.1G, 9.6h)
* Georgian (20M, 1.4h)
* Guarani (0.7M, 1.3h)
* Greek (84M, 6.7h)
* Hakha Chin (26M, 0.1h)
* Hausa (90M, 1.0h)
* Interlingua (58M, 4.0h)
* Irish (38M, 3.2h)
* Latvian (51M, 4.2h)
* Lithuanian (21M, 0.46h)
* Maltese (88M, 7.3h)
* Oriya (0.7M, 0.1h)
* Romanian (59M, 4.5h)
* Sakha (42M, 3.3h)
* Slovenian (43M, 3.0h)
* Slovak (31M, 1.9h)
* Sursilvan (61M, 4.8h)
* Tamil (8.8M, 0.6h)
* Vallader (14M, 1.2h)
* Vietnamese (1.2M, 0.1h)
Medium-resourced (>10 & <100 hours):
* Czech (0.3G, 24h)
* Dutch (0.8G, 70h)
* Estonian (0.2G, 19h)
* Esperanto (1.3G, 77h)
* Indonesian (0.1G, 11h)
* Kyrgyz (0.1G, 12h)
* Mongolian (0.1G, 12h)
* Portuguese (0.7G, 58h)
* Swedish (0.1G, 12h)
* Tatar (4G, 30h)
* Turkish (1.3G, 29h)
* Ukrainian (0.2G, 18h)
High-resourced (>100 hours):
* Basque (1.7G, 118h)
* Catalan (8.7G, 615h)
* English (26G, 1957h)
* French (9.3G, 754h)
* German (14G, 1083h)
* Italian (2.2G, 155h)
* Kinyarwanda (6.1G, 422h)
* Persian (4.5G, 327h)
* Polish (1.8G, 130h)
* Russian (2.1G, 137h)
* Spanish (4.9G, 349h)
* Welsh (4.5G, 108h)
## Dataset Structure
### Data Instances
```python
{'file': 'абзар_common_voice_tt_17737010.opus',
'is_valid': True,
'language': 0,
'speaker_id': '687025afd5ce033048472754c8d2cb1cf8a617e469866bbdb3746e2bb2194202094a715906f91feb1c546893a5d835347f4869e7def2e360ace6616fb4340e38',
'gender': 0,
'keyword': 'абзар',
'audio': {'path': 'абзар_common_voice_tt_17737010.opus',
'array': array([2.03458695e-34, 2.03458695e-34, 2.03458695e-34, ...,
2.03458695e-34, 2.03458695e-34, 2.03458695e-34]),
'sampling_rate': 48000}}
```
### Data Fields
* file: string, relative audio path inside the archive
* is_valid: if a sample is valid
* language: language of an instance. Makes sense only when providing multiple languages to the
dataset loader (for example, `load_dataset("ml_spoken_words", languages=["ar", "tt"])`)
* speaker_id: unique id of a speaker. Can be "NA" if an instance is invalid
* gender: speaker gender. Can be one of `["MALE", "FEMALE", "OTHER", "NAN"]`
* keyword: word spoken in a current sample
* audio: a dictionary containing the relative path to the audio file,
the decoded audio array, and the sampling rate.
Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically
decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of
a large number of audio files might take a significant amount of time.
Thus, it is important to first query the sample index before the "audio" column,
i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`
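A minimal sketch for reading the decoded audio of a single example, using the Tatar `wav` configuration shown earlier (the `train` split per the splits described below):
```python
from datasets import load_dataset

ds = load_dataset("MLCommons/ml_spoken_words", "tt_wav", split="train")

sample = ds[0]  # query the sample index first so that only this file is decoded
print(sample["keyword"], sample["gender"], sample["is_valid"])

audio = sample["audio"]
print(audio["sampling_rate"])  # 16 kHz for the wav configurations
print(audio["array"].shape)    # decoded waveform as a NumPy array
```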
### Data Splits
The data for each language is split into train / validation / test parts.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data comes from the Common Voice dataset.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online.
You agree to not attempt to determine the identity of speakers.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is licensed under [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) and can be used for academic
research and commercial applications in keyword spotting and spoken term search.
### Citation Information
```
@inproceedings{mazumder2021multilingual,
title={Multilingual Spoken Words Corpus},
author={Mazumder, Mark and Chitlangia, Sharad and Banbury, Colby and Kang, Yiping and Ciro, Juan Manuel and Achorn, Keith and Galvez, Daniel and Sabini, Mark and Mattson, Peter and Kanter, David and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021}
}
```
### Contributions
Thanks to [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
|
mstz/wine | 2023-04-07T15:11:56.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"wine",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | null | null | 2 | 378 | ---
language:
- en
tags:
- wine
- tabular_classification
- binary_classification
pretty_name: Wine quality
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- wine
license: cc
---
# Wine
The [Wine dataset](https://www.kaggle.com/datasets/ghassenkhaled/wine-quality-data) from Kaggle.
Classify wine as red or white.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| wine | Binary classification | Is this red wine? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/wine")["train"]
``` |
pasinit/xlwic | 2022-10-25T09:54:22.000Z | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:bg",
"language:zh",
"language:hr",
"language:da",
"language:nl",
"language:et",
"language:fa",
"language:ja",
"language:ko",
"language:it",
"language:fr",
"language:de",
"license:cc-by-nc-4.0",
"region:us"
] | pasinit | A system's task on any of the XL-WiC datasets is to identify the intended meaning of a word in a context of a given language. XL-WiC is framed as a binary classification task. Each instance in XL-WiC has a target word w, either a verb or a noun, for which two contexts are provided. Each of these contexts triggers a specific meaning of w. The task is to identify if the occurrences of w in the two contexts correspond to the same meaning or not.
XL-WiC provides dev and test sets in the following 12 languages:
Bulgarian (BG)
Danish (DA)
German (DE)
Estonian (ET)
Farsi (FA)
French (FR)
Croatian (HR)
Italian (IT)
Japanese (JA)
Korean (KO)
Dutch (NL)
Chinese (ZH)
and training sets in the following 3 languages:
German (DE)
French (FR)
Italian (IT) | @inproceedings{raganato-etal-2020-xl-wic,
title={XL-WiC: A Multilingual Benchmark for Evaluating Semantic Contextualization},
author={Raganato, Alessandro and Pasini, Tommaso and Camacho-Collados, Jose and Pilehvar, Mohammad Taher},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
pages={7193--7206},
year={2020}
} | null | 4 | 377 | ---
annotations_creators:
- expert-generated
extended:
- original
language_creators:
- found
language:
- en
- bg
- zh
- hr
- da
- nl
- et
- fa
- ja
- ko
- it
- fr
- de
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
---
# XL-WiC
Huggingface dataset for the XL-WiC paper [https://www.aclweb.org/anthology/2020.emnlp-main.584.pdf](https://www.aclweb.org/anthology/2020.emnlp-main.584.pdf).
Please refer to the official [website](https://pilehvar.github.io/xlwic/) for more information.
## Configurations
When loading one of the XL-WiC datasets one has to specify the training language and the target language (on which dev and test will be performed).
Please refer to the [Languages](#languages) section to see in which languages training data is available.
For example, we can load the dataset having English as training language and Italian as target language as follows:
```python
from datasets import load_dataset
dataset = load_dataset('pasinit/xlwic', 'en_it')
```
## Languages
**Training data**
- en (English)
- fr (French)
- de (German)
- it (Italian)
**Dev & Test data**
- fr (French)
- de (German)
- it (Italian)
- bg (Bulgarian)
- zh (Chinese)
- hr (Croatian)
- da (Danish)
- nl (Dutch)
- et (Estonian)
- fa (Farsi)
- ja (Japanese)
- ko (Korean)
|
mariosasko/test_imagefolder_with_metadata | 2022-06-28T12:59:23.000Z | [
"region:us"
] | mariosasko | null | null | null | 0 | 377 | Entry not found |
mteb/mind_small | 2022-08-04T23:00:59.000Z | [
"region:us"
] | mteb | null | null | null | 0 | 376 | The `test` split is the `validation` split of [MIND](https://msnews.github.io/). Labels for the original `test` split are unavailable.
Thus, we renamed it to test for consistency in the MTEB benchmark. |
labr | 2023-01-25T14:34:10.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ar",
"license:unknown",
"region:us"
] | null | This dataset contains over 63,000 book reviews in Arabic.It is the largest sentiment analysis dataset for Arabic to-date.The book reviews were harvested from the website Goodreads during the month or March 2013.Each book review comes with the goodreads review id, the user id, the book id, the rating (1 to 5) and the text of the review. | @inproceedings{aly2013labr,
title={Labr: A large scale arabic book reviews dataset},
author={Aly, Mohamed and Atiya, Amir},
booktitle={Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
pages={494--498},
year={2013}
} | null | 0 | 375 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: labr
pretty_name: LABR
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '1'
'1': '2'
'2': '3'
'3': '4'
'4': '5'
config_name: plain_text
splits:
- name: train
num_bytes: 7051103
num_examples: 11760
- name: test
num_bytes: 1703399
num_examples: 2935
download_size: 39953712
dataset_size: 8754502
---
# Dataset Card for LABR
## Table of Contents
- [Dataset Card for LABR](#dataset-card-for-labr)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [|split|num examples|](#splitnum-examples)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [LABR](https://github.com/mohamedadaly/LABR)
- **Paper:** [LABR: Large-scale Arabic Book Reviews Dataset](https://aclanthology.org/P13-2088/)
- **Point of Contact:** [Mohammed Aly](mailto:mohamed@mohamedaly.info)
### Dataset Summary
This dataset contains over 63,000 book reviews in Arabic. It is the largest sentiment analysis dataset for Arabic to-date. The book reviews were harvested from the website Goodreads during the month or March 2013. Each book review comes with the goodreads review id, the user id, the book id, the rating (1 to 5) and the text of the review.
### Supported Tasks and Leaderboards
The dataset was published on this [paper](https://www.aclweb.org/anthology/P13-2088.pdf).
### Languages
The dataset is based on Arabic.
## Dataset Structure
### Data Instances
A typical data point comprises a rating from 1 to 5 where the higher the rating the better the review.
### Data Fields
- `text` (str): Review text.
- `label` (int): Review rating.
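A minimal loading sketch based on the fields above:
```python
from datasets import load_dataset

# "labr" is the canonical dataset name on the Hub; "plain_text" is its only configuration.
ds = load_dataset("labr", split="train")

example = ds[0]
print(example["text"][:100])  # review text
print(example["label"])       # class index corresponding to the 1-5 star rating
```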
### Data Splits
The data is split into training and test sets. The split is organized as follows:
| | train | test |
|---------- |-------:|------:|
|data split | 11,760 | 2,935 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
Over 220,000 reviews were downloaded from the book readers' social network www.goodreads.com during the month of March 2013.
#### Who are the source language producers?
Reviews.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{aly2013labr,
title={Labr: A large scale arabic book reviews dataset},
author={Aly, Mohamed and Atiya, Amir},
booktitle={Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
pages={494--498},
year={2013}
}
```
### Contributions
Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset. |
nielsr/XFUN | 2022-09-18T10:57:50.000Z | [
"region:us"
] | nielsr | null | null | null | 3 | 375 | Entry not found |
yxchar/chemprot-tlm | 2021-11-04T22:59:08.000Z | [
"region:us"
] | yxchar | null | null | null | 0 | 375 | Entry not found |
result-kand2-sdxl-wuerst-karlo/02511ac7 | 2023-09-26T22:38:28.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 375 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 187
num_examples: 10
download_size: 1369
dataset_size: 187
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "02511ac7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Cohere/wikipedia-22-12 | 2023-02-22T15:58:09.000Z | [
"region:us"
] | Cohere | null | null | null | 24 | 374 | This dataset contains a pre-processed version of Wikipedia suitable for semantic search.
You can load the dataset like this:
```python
from datasets import load_dataset
lang = 'en'
data = load_dataset(f"Cohere/wikipedia-22-12", lang, split='train', streaming=True)
for row in data:
print(row)
break
```
This will load the dataset in a streaming mode (so that you don't need to download the whole dataset) and you can process it row-by-row.
The articles are split into paragraphs. Further, for each article we added statistics on its page views in 2022 as well as the number of other languages it is available in.
The dataset is sorted by page views, so that the most popular Wikipedia articles come first. So if you e.g. read the top-100k rows, you get quite a good coverage on topics that
are broadly interesting for people.
## Semantic Search Embeddings
We also provide versions where documents have been embedded using the [cohere multilingual embedding model](https://txt.cohere.ai/multilingual/),
e.g. [wikipedia-22-12-en-embeddings](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings) contains the paragraphs and their respective embeddings for English.
You can find the embeddings for other languages in the datasets `wikipedia-22-12-{lang}-embeddings`.
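A minimal sketch for streaming one of these embeddings datasets (the split name and columns are assumptions, so inspect the first row before relying on them):
```python
from datasets import load_dataset

# Stream the English embeddings variant referenced above.
docs = load_dataset("Cohere/wikipedia-22-12-en-embeddings", split="train", streaming=True)

for doc in docs:
    print(doc.keys())  # inspect the available columns (text, embedding, metadata, ...)
    break
```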
## Dataset Creation
The [XML data dumps](https://dumps.wikimedia.org/backup-index.html) from December 20th, 2022 were downloaded and processed
with [wikiextractor](https://github.com/attardi/wikiextractor) (with Version: 2.75) and the following command:
```
python WikiExtractor.py --json -s --lists ../dumps/dewiki-20210101-pages-articles.xml.bz2 -o text_de
```
To count in how many languages an article is available, we downloaded the SQL files with language links from:
```
https://dumps.wikimedia.org/{lang}wiki/{datestr}/{filename}
```
And processed the SQL file to read for each article the outbound links.
Pageviews were downloaded from:
```
https://dumps.wikimedia.org/other/pageviews/{year}/{year}-{month_str}/pageviews-{year}{month_str}{day_str}-{hour_str}0000.gz
```
For each day we downloaded the pageviews for a random hour and then computed the harmonic mean of the page views. We used the harmonic mean to address cases where articles receive
a very high number of page views at a particular point in time, and we use log scores for the page views to increase numerical stability.
The code to compute the page views was:
```python
import gzip
import sys
from collections import Counter, defaultdict
import math
import tqdm
import json
title_views = {}
#Score: Harmonic mean (View_Day_1 * View_Day_2 * View_day_3)
# Add log for better numerical stabilitiy
# Add +1 to avoid log(0)
# Compare the sum, so that days without view are counted as 0 views
for filepath in tqdm.tqdm(sys.argv[1:]):
with gzip.open(filepath, "rt") as fIn:
for line in fIn:
splits = line.strip().split()
if len(splits) == 4:
lang, title, views, _ = line.strip().split()
lang = lang.lower()
if lang.endswith(".m"): #Add mobile page scores to main score
lang = lang[0:-2]
if lang.count(".") > 0:
continue
if lang not in title_views:
title_views[lang] = {}
if title not in title_views[lang]:
title_views[lang][title] = 0.0
title_views[lang][title] += math.log(int(views)+1)
#Save results
for lang in title_views:
with open(f"pageviews_summary/{lang}.json", "w") as fOut:
fOut.write(json.dumps(title_views[lang]))
```
We filter out paragraphs that start with `BULLET::::`, `Section::::`, `<templatestyles`, or `[[File:`.
Further, we only include paragraphs with at least 100 characters (measured with Python's `len`), as sketched below.
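A minimal sketch of this filtering step (illustrative only; not the exact script used to build the release):
```python
# Keep a paragraph only if it does not start with an unwanted prefix
# and contains at least 100 characters.
BAD_PREFIXES = ("BULLET::::", "Section::::", "<templatestyles", "[[File:")
def keep_paragraph(paragraph: str) -> bool:
    return not paragraph.startswith(BAD_PREFIXES) and len(paragraph) >= 100
```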
|
project-test/graduation_mobile_subset | 2023-09-05T07:39:59.000Z | [
"region:us"
] | project-test | null | null | null | 0 | 374 | Entry not found |
kuroneko5943/stock11 | 2023-01-16T04:11:18.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:zh",
"license:apache-2.0",
"stock",
"region:us"
] | kuroneko5943 | GLUE, the General Language Understanding Evaluation benchmark
(https://gluebenchmark.com/) is a collection of resources for training,
evaluating, and analyzing natural language understanding systems. | \ | null | 5 | 372 | ---
annotations_creators:
- machine-generated
language:
- zh
language_creators:
- crowdsourced
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: stock11
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- stock
task_categories:
- text-classification
task_ids:
- sentiment-classification
--- |
Multimodal-Fatima/StanfordCars_test | 2023-06-12T02:33:45.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | null | 0 | 372 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': am general hummer suv 2000
'1': acura rl sedan 2012
'2': acura tl sedan 2012
'3': acura tl type-s 2008
'4': acura tsx sedan 2012
'5': acura integra type r 2001
'6': acura zdx hatchback 2012
'7': aston martin v8 vantage convertible 2012
'8': aston martin v8 vantage coupe 2012
'9': aston martin virage convertible 2012
'10': aston martin virage coupe 2012
'11': audi rs 4 convertible 2008
'12': audi a5 coupe 2012
'13': audi tts coupe 2012
'14': audi r8 coupe 2012
'15': audi v8 sedan 1994
'16': audi 100 sedan 1994
'17': audi 100 wagon 1994
'18': audi tt hatchback 2011
'19': audi s6 sedan 2011
'20': audi s5 convertible 2012
'21': audi s5 coupe 2012
'22': audi s4 sedan 2012
'23': audi s4 sedan 2007
'24': audi tt rs coupe 2012
'25': bmw activehybrid 5 sedan 2012
'26': bmw 1 series convertible 2012
'27': bmw 1 series coupe 2012
'28': bmw 3 series sedan 2012
'29': bmw 3 series wagon 2012
'30': bmw 6 series convertible 2007
'31': bmw x5 suv 2007
'32': bmw x6 suv 2012
'33': bmw m3 coupe 2012
'34': bmw m5 sedan 2010
'35': bmw m6 convertible 2010
'36': bmw x3 suv 2012
'37': bmw z4 convertible 2012
'38': bentley continental supersports conv. convertible 2012
'39': bentley arnage sedan 2009
'40': bentley mulsanne sedan 2011
'41': bentley continental gt coupe 2012
'42': bentley continental gt coupe 2007
'43': bentley continental flying spur sedan 2007
'44': bugatti veyron 16.4 convertible 2009
'45': bugatti veyron 16.4 coupe 2009
'46': buick regal gs 2012
'47': buick rainier suv 2007
'48': buick verano sedan 2012
'49': buick enclave suv 2012
'50': cadillac cts-v sedan 2012
'51': cadillac srx suv 2012
'52': cadillac escalade ext crew cab 2007
'53': chevrolet silverado 1500 hybrid crew cab 2012
'54': chevrolet corvette convertible 2012
'55': chevrolet corvette zr1 2012
'56': chevrolet corvette ron fellows edition z06 2007
'57': chevrolet traverse suv 2012
'58': chevrolet camaro convertible 2012
'59': chevrolet hhr ss 2010
'60': chevrolet impala sedan 2007
'61': chevrolet tahoe hybrid suv 2012
'62': chevrolet sonic sedan 2012
'63': chevrolet express cargo van 2007
'64': chevrolet avalanche crew cab 2012
'65': chevrolet cobalt ss 2010
'66': chevrolet malibu hybrid sedan 2010
'67': chevrolet trailblazer ss 2009
'68': chevrolet silverado 2500hd regular cab 2012
'69': chevrolet silverado 1500 classic extended cab 2007
'70': chevrolet express van 2007
'71': chevrolet monte carlo coupe 2007
'72': chevrolet malibu sedan 2007
'73': chevrolet silverado 1500 extended cab 2012
'74': chevrolet silverado 1500 regular cab 2012
'75': chrysler aspen suv 2009
'76': chrysler sebring convertible 2010
'77': chrysler town and country minivan 2012
'78': chrysler 300 srt-8 2010
'79': chrysler crossfire convertible 2008
'80': chrysler pt cruiser convertible 2008
'81': daewoo nubira wagon 2002
'82': dodge caliber wagon 2012
'83': dodge caliber wagon 2007
'84': dodge caravan minivan 1997
'85': dodge ram pickup 3500 crew cab 2010
'86': dodge ram pickup 3500 quad cab 2009
'87': dodge sprinter cargo van 2009
'88': dodge journey suv 2012
'89': dodge dakota crew cab 2010
'90': dodge dakota club cab 2007
'91': dodge magnum wagon 2008
'92': dodge challenger srt8 2011
'93': dodge durango suv 2012
'94': dodge durango suv 2007
'95': dodge charger sedan 2012
'96': dodge charger srt-8 2009
'97': eagle talon hatchback 1998
'98': fiat 500 abarth 2012
'99': fiat 500 convertible 2012
'100': ferrari ff coupe 2012
'101': ferrari california convertible 2012
'102': ferrari 458 italia convertible 2012
'103': ferrari 458 italia coupe 2012
'104': fisker karma sedan 2012
'105': ford f-450 super duty crew cab 2012
'106': ford mustang convertible 2007
'107': ford freestar minivan 2007
'108': ford expedition el suv 2009
'109': ford edge suv 2012
'110': ford ranger supercab 2011
'111': ford gt coupe 2006
'112': ford f-150 regular cab 2012
'113': ford f-150 regular cab 2007
'114': ford focus sedan 2007
'115': ford e-series wagon van 2012
'116': ford fiesta sedan 2012
'117': gmc terrain suv 2012
'118': gmc savana van 2012
'119': gmc yukon hybrid suv 2012
'120': gmc acadia suv 2012
'121': gmc canyon extended cab 2012
'122': geo metro convertible 1993
'123': hummer h3t crew cab 2010
'124': hummer h2 sut crew cab 2009
'125': honda odyssey minivan 2012
'126': honda odyssey minivan 2007
'127': honda accord coupe 2012
'128': honda accord sedan 2012
'129': hyundai veloster hatchback 2012
'130': hyundai santa fe suv 2012
'131': hyundai tucson suv 2012
'132': hyundai veracruz suv 2012
'133': hyundai sonata hybrid sedan 2012
'134': hyundai elantra sedan 2007
'135': hyundai accent sedan 2012
'136': hyundai genesis sedan 2012
'137': hyundai sonata sedan 2012
'138': hyundai elantra touring hatchback 2012
'139': hyundai azera sedan 2012
'140': infiniti g coupe ipl 2012
'141': infiniti qx56 suv 2011
'142': isuzu ascender suv 2008
'143': jaguar xk xkr 2012
'144': jeep patriot suv 2012
'145': jeep wrangler suv 2012
'146': jeep liberty suv 2012
'147': jeep grand cherokee suv 2012
'148': jeep compass suv 2012
'149': lamborghini reventon coupe 2008
'150': lamborghini aventador coupe 2012
'151': lamborghini gallardo lp 570-4 superleggera 2012
'152': lamborghini diablo coupe 2001
'153': land rover range rover suv 2012
'154': land rover lr2 suv 2012
'155': lincoln town car sedan 2011
'156': mini cooper roadster convertible 2012
'157': maybach landaulet convertible 2012
'158': mazda tribute suv 2011
'159': mclaren mp4-12c coupe 2012
'160': mercedes-benz 300-class convertible 1993
'161': mercedes-benz c-class sedan 2012
'162': mercedes-benz sl-class coupe 2009
'163': mercedes-benz e-class sedan 2012
'164': mercedes-benz s-class sedan 2012
'165': mercedes-benz sprinter van 2012
'166': mitsubishi lancer sedan 2012
'167': nissan leaf hatchback 2012
'168': nissan nv passenger van 2012
'169': nissan juke hatchback 2012
'170': nissan 240sx coupe 1998
'171': plymouth neon coupe 1999
'172': porsche panamera sedan 2012
'173': ram c/v cargo van minivan 2012
'174': rolls-royce phantom drophead coupe convertible 2012
'175': rolls-royce ghost sedan 2012
'176': rolls-royce phantom sedan 2012
'177': scion xd hatchback 2012
'178': spyker c8 convertible 2009
'179': spyker c8 coupe 2009
'180': suzuki aerio sedan 2007
'181': suzuki kizashi sedan 2012
'182': suzuki sx4 hatchback 2012
'183': suzuki sx4 sedan 2012
'184': tesla model s sedan 2012
'185': toyota sequoia suv 2012
'186': toyota camry sedan 2012
'187': toyota corolla sedan 2012
'188': toyota 4runner suv 2012
'189': volkswagen golf hatchback 2012
'190': volkswagen golf hatchback 1991
'191': volkswagen beetle hatchback 2012
'192': volvo c30 hatchback 2012
'193': volvo 240 sedan 1993
'194': volvo xc90 suv 2007
'195': smart fortwo convertible 2012
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: LLM_Description_opt175b_downstream_tasks_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: blip_caption_beam_5
dtype: string
- name: Attributes_ViT_L_14_text_davinci_003_full
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_stanfordcars
sequence: string
- name: clip_tags_ViT_L_14_with_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_wo_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_simple_specific
dtype: string
- name: clip_tags_ViT_L_14_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_16_simple_specific
dtype: string
- name: clip_tags_ViT_B_16_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_32_simple_specific
dtype: string
- name: clip_tags_ViT_B_32_ensemble_specific
dtype: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
splits:
- name: test
num_bytes: 1016320238.0
num_examples: 8041
download_size: 989991348
dataset_size: 1016320238.0
---
# Dataset Card for "StanfordCars_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mozilla-foundation/common_voice_12_0 | 2023-06-26T15:23:50.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | mozilla-foundation | null | @inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
} | null | 10 | 372 | ---
pretty_name: Common Voice Corpus 12.0
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language_bcp47:
- ab
- ar
- as
- ast
- az
- ba
- bas
- be
- bg
- bn
- br
- ca
- ckb
- cnh
- cs
- cv
- cy
- da
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- gl
- gn
- ha
- hi
- hsb
- hu
- hy-AM
- ia
- id
- ig
- it
- ja
- ka
- kab
- kk
- kmr
- ko
- ky
- lg
- lt
- lv
- mdf
- mhr
- mk
- ml
- mn
- mr
- mrj
- mt
- myv
- nan-tw
- ne-NP
- nl
- nn-NO
- oc
- or
- pa-IN
- pl
- pt
- quy
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sat
- sc
- sk
- skr
- sl
- sr
- sv-SE
- sw
- ta
- th
- ti
- tig
- tok
- tr
- tt
- tw
- ug
- uk
- ur
- uz
- vi
- vot
- yo
- yue
- zh-CN
- zh-HK
- zh-TW
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- 10K<n<100K
ar:
- 100K<n<1M
as:
- 1K<n<10K
ast:
- n<1K
az:
- n<1K
ba:
- 100K<n<1M
bas:
- 1K<n<10K
be:
- 1M<n<10M
bg:
- 1K<n<10K
bn:
- 1M<n<10M
br:
- 10K<n<100K
ca:
- 1M<n<10M
ckb:
- 100K<n<1M
cnh:
- 1K<n<10K
cs:
- 10K<n<100K
cv:
- 10K<n<100K
cy:
- 100K<n<1M
da:
- 1K<n<10K
de:
- 100K<n<1M
dv:
- 10K<n<100K
el:
- 10K<n<100K
en:
- 1M<n<10M
eo:
- 1M<n<10M
es:
- 1M<n<10M
et:
- 10K<n<100K
eu:
- 100K<n<1M
fa:
- 100K<n<1M
fi:
- 10K<n<100K
fr:
- 100K<n<1M
fy-NL:
- 100K<n<1M
ga-IE:
- 1K<n<10K
gl:
- 10K<n<100K
gn:
- 1K<n<10K
ha:
- 10K<n<100K
hi:
- 10K<n<100K
hsb:
- 1K<n<10K
hu:
- 10K<n<100K
hy-AM:
- 1K<n<10K
ia:
- 10K<n<100K
id:
- 10K<n<100K
ig:
- 1K<n<10K
it:
- 100K<n<1M
ja:
- 100K<n<1M
ka:
- 10K<n<100K
kab:
- 100K<n<1M
kk:
- 1K<n<10K
kmr:
- 10K<n<100K
ko:
- 1K<n<10K
ky:
- 10K<n<100K
lg:
- 100K<n<1M
lt:
- 10K<n<100K
lv:
- 10K<n<100K
mdf:
- n<1K
mhr:
- 100K<n<1M
mk:
- n<1K
ml:
- 1K<n<10K
mn:
- 10K<n<100K
mr:
- 10K<n<100K
mrj:
- 10K<n<100K
mt:
- 10K<n<100K
myv:
- 1K<n<10K
nan-tw:
- 10K<n<100K
ne-NP:
- n<1K
nl:
- 10K<n<100K
nn-NO:
- n<1K
oc:
- n<1K
or:
- 1K<n<10K
pa-IN:
- 1K<n<10K
pl:
- 100K<n<1M
pt:
- 100K<n<1M
quy:
- n<1K
rm-sursilv:
- 1K<n<10K
rm-vallader:
- 1K<n<10K
ro:
- 10K<n<100K
ru:
- 100K<n<1M
rw:
- 1M<n<10M
sah:
- 1K<n<10K
sat:
- n<1K
sc:
- 1K<n<10K
sk:
- 10K<n<100K
skr:
- 1K<n<10K
sl:
- 10K<n<100K
sr:
- 1K<n<10K
sv-SE:
- 10K<n<100K
sw:
- 100K<n<1M
ta:
- 100K<n<1M
th:
- 100K<n<1M
ti:
- n<1K
tig:
- n<1K
tok:
- 10K<n<100K
tr:
- 10K<n<100K
tt:
- 10K<n<100K
tw:
- n<1K
ug:
- 10K<n<100K
uk:
- 10K<n<100K
ur:
- 100K<n<1M
uz:
- 100K<n<1M
vi:
- 10K<n<100K
vot:
- n<1K
yo:
- n<1K
yue:
- 10K<n<100K
zh-CN:
- 100K<n<1M
zh-HK:
- 100K<n<1M
zh-TW:
- 100K<n<1M
source_datasets:
- extended|common_voice
task_categories:
- automatic-speech-recognition
paperswithcode_id: common-voice
extra_gated_prompt: "By clicking on “Access repository” below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."
---
# Dataset Card for Common Voice Corpus 12.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Vaibhav Srivastav](mailto:vaibhav@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of unique MP3 files with corresponding text files.
Many of the 26119 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 17127 validated hours in 104 languages, but more voices and languages are always being added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Autoevaluate Leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=mozilla-foundation%2Fcommon_voice_11_0&only_verified=0&task=automatic-speech-recognition&config=ar&split=test&metric=wer)
### Languages
```
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Yoruba
```
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
```python
from datasets import load_dataset
cv_12 = load_dataset("mozilla-foundation/common_voice_12_0", "hi", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
cv_12 = load_dataset("mozilla-foundation/common_voice_12_0", "hi", split="train", streaming=True)
print(next(iter(cv_12)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
cv_12 = load_dataset("mozilla-foundation/common_voice_12_0", "hi", split="train")
batch_sampler = BatchSampler(RandomSampler(cv_12), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_12, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
cv_12 = load_dataset("mozilla-foundation/common_voice_12_0", "hi", split="train", streaming=True)
dataloader = DataLoader(cv_12, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
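As a minimal sketch, the `audio` column can be resampled on the fly with `cast_column`; the 16 kHz target below is an assumption that depends on the model you plan to train (Common Voice audio itself is stored at 48 kHz):
```python
from datasets import load_dataset, Audio
cv_12 = load_dataset("mozilla-foundation/common_voice_12_0", "hi", split="train", use_auth_token=True)
# Audio is decoded and resampled to 16 kHz whenever a sample is accessed.
cv_12 = cv_12.cast_column("audio", Audio(sampling_rate=16_000))
sample = cv_12[0]["audio"]  # {"path": ..., "array": ..., "sampling_rate": 16000}
```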
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 12 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test and train splits contain only data that has been reviewed and deemed of high quality.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_12_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
loubnabnl/code_reviews_3 | 2023-09-17T18:45:05.000Z | [
"region:us"
] | loubnabnl | null | null | null | 0 | 372 | ---
dataset_info:
features:
- name: bucket
dtype: string
- name: pull_request_info
struct:
- name: org.id
dtype: int64
- name: public
dtype: bool
- name: pull_request.additions
dtype: int64
- name: pull_request.base.user.type
dtype: string
- name: pull_request.body
dtype: string
- name: pull_request.changed_files
dtype: int64
- name: pull_request.closed_at
dtype: string
- name: pull_request.comments
dtype: int64
- name: pull_request.commits
dtype: int64
- name: pull_request.created_at
dtype: string
- name: pull_request.deletions
dtype: int64
- name: pull_request.guid
dtype: string
- name: pull_request.head.user.type
dtype: string
- name: pull_request.id
dtype: int64
- name: pull_request.merged_at
dtype: string
- name: pull_request.merged_by.login
dtype: string
- name: pull_request.milestone.description
dtype: string
- name: pull_request.milestone.number
dtype: int64
- name: pull_request.milestone.title
dtype: string
- name: pull_request.number
dtype: int64
- name: pull_request.review_comments
dtype: int64
- name: pull_request.state
dtype: string
- name: pull_request.title
dtype: string
- name: pull_request.user.id
dtype: int64
- name: pull_request.user.login
dtype: string
- name: repo.id
dtype: int64
- name: repo.name
dtype: string
- name: head_repo_info
struct:
- name: pull_request.head.label
dtype: string
- name: pull_request.head.ref
dtype: string
- name: pull_request.head.repo.default_branch
dtype: string
- name: pull_request.head.repo.description
dtype: string
- name: pull_request.head.repo.homepage
dtype: string
- name: pull_request.head.repo.language
dtype: string
- name: pull_request.head.repo.license.name
dtype: string
- name: pull_request.head.repo.name
dtype: string
- name: pull_request.head.repo.owner.login
dtype: string
- name: pull_request.head.repo.owner.type
dtype: string
- name: pull_request.head.repo.private
dtype: bool
- name: pull_request.head.repo.stargazers_count
dtype: int64
- name: pull_request.head.sha
dtype: string
- name: pull_request.head.user.login
dtype: string
- name: pull_request.head.user.type
dtype: string
- name: base_repo_info
struct:
- name: pull_request.base.label
dtype: string
- name: pull_request.base.ref
dtype: string
- name: pull_request.base.repo.default_branch
dtype: string
- name: pull_request.base.repo.description
dtype: string
- name: pull_request.base.repo.forks_count
dtype: int64
- name: pull_request.base.repo.homepage
dtype: string
- name: pull_request.base.repo.language
dtype: string
- name: pull_request.base.repo.license.name
dtype: string
- name: pull_request.base.repo.name
dtype: string
- name: pull_request.base.repo.open_issues_count
dtype: int64
- name: pull_request.base.repo.owner.login
dtype: string
- name: pull_request.base.repo.owner.type
dtype: string
- name: pull_request.base.repo.private
dtype: bool
- name: pull_request.base.repo.stargazers_count
dtype: int64
- name: pull_request.base.repo.watchers_count
dtype: int64
- name: pull_request.base.sha
dtype: string
- name: pull_request.base.user.login
dtype: string
- name: pull_request.base.user.type
dtype: string
- name: pull_request.comments
dtype: int64
- name: pull_request.label.name
dtype: 'null'
- name: pull_request.review_comments
dtype: int64
- name: events
list:
- name: action
dtype: string
- name: actor.id
dtype: int64
- name: actor.login
dtype: string
- name: comment.author_association
dtype: string
- name: comment.body
dtype: string
- name: comment.commit_id
dtype: string
- name: comment.created_at
dtype: string
- name: comment.diff_hunk
dtype: string
- name: comment.id
dtype: int64
- name: comment.in_reply_to_id
dtype: int64
- name: comment.line
dtype: int64
- name: comment.original_commit_id
dtype: string
- name: comment.original_line
dtype: int64
- name: comment.original_position
dtype: int64
- name: comment.original_start_line
dtype: int64
- name: comment.path
dtype: string
- name: comment.position
dtype: int64
- name: comment.side
dtype: string
- name: comment.start_line
dtype: int64
- name: comment.start_side
dtype: string
- name: comment.updated_at
dtype: string
- name: created_at
dtype: timestamp[us, tz=UTC]
- name: issue.author
dtype: string
- name: issue.comment
dtype: string
- name: issue.comment_id
dtype: float64
- name: pull_request.merged
dtype: bool
- name: pull_request.merged_by.login
dtype: string
- name: pull_request.merged_by.type
dtype: string
- name: pull_request.state
dtype: string
- name: review.author_association
dtype: string
- name: review.body
dtype: string
- name: review.commit_id
dtype: string
- name: review.id
dtype: int64
- name: review.state
dtype: string
- name: review.submitted_at
dtype: string
- name: type
dtype: string
- name: user.login
dtype: string
- name: user.type
dtype: string
splits:
- name: train
num_bytes: 54955618
num_examples: 10000
download_size: 16233742
dataset_size: 54955618
---
# Dataset Card for "code_reviews_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/16193144 | 2023-09-26T23:48:45.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 371 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 169
num_examples: 10
download_size: 1324
dataset_size: 169
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "16193144"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/c2696aec | 2023-09-26T23:48:54.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 371 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 169
num_examples: 10
download_size: 1324
dataset_size: 169
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "c2696aec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/2af00e50 | 2023-09-27T02:03:14.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 370 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 206
num_examples: 10
download_size: 1404
dataset_size: 206
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "2af00e50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
THUIR/T2Ranking | 2023-07-03T08:46:32.000Z | [
"task_categories:text-retrieval",
"size_categories:1M<n<10M",
"language:zh",
"license:apache-2.0",
"arxiv:2304.03679",
"region:us"
] | THUIR | null | @misc{xie2023t2ranking,
title={T2Ranking: A large-scale Chinese Benchmark for Passage Ranking},
author={Xiaohui Xie and Qian Dong and Bingning Wang and Feiyang Lv and Ting Yao and Weinan Gan and Zhijing Wu and Xiangsheng Li and Haitao Li and Yiqun Liu and Jin Ma},
year={2023},
eprint={2304.03679},
archivePrefix={arXiv},
primaryClass={cs.IR}
} | null | 13 | 368 | ---
license: apache-2.0
task_categories:
- text-retrieval
language:
- zh
size_categories:
- 1M<n<10M
---
# T<sup>2</sup>Ranking
## Introduction
T<sup>2</sup>Ranking is a large-scale Chinese benchmark for passage ranking. The details about T<sup>2</sup>Ranking are elaborated in [this paper](https://arxiv.org/abs/2304.03679#).
Passage ranking is an important and challenging topic for both academia and industry in the area of Information Retrieval (IR). The goal of passage ranking is to compile a search result list, ordered by relevance to the query, from a large passage collection. Typically, passage ranking involves two stages: passage retrieval and passage re-ranking.
To support passage ranking research, various benchmark datasets have been constructed. However, the commonly used datasets for passage ranking usually focus on the English language. For non-English scenarios, such as Chinese, the existing datasets are limited in terms of data scale and fine-grained relevance annotation, and suffer from false negative issues.
To address this problem, we introduce T<sup>2</sup>Ranking, a large-scale Chinese benchmark for passage ranking. T<sup>2</sup>Ranking comprises more than 300K queries and over 2M unique passages from real-world search engines. Specifically, we sample question-based search queries from user logs of the Sogou search engine, a popular search system in China. For each query, we extract the content of corresponding documents from different search engines. After model-based passage segmentation and clustering-based passage de-duplication, a large-scale passage corpus is obtained. For a given query and its corresponding passages, we hire expert annotators to provide 4-level relevance judgments for each query-passage pair.
<div align=center><img width="600" height="200" src="https://github.com/THUIR/T2Ranking/blob/main/pic/stat.png?raw=true"/></div>
<div align=center>Table 1: The data statistics of datasets commonly used in passage ranking. FR (SR): First (Second)-stage of passage ranking, i.e., passage Retrieval (Re-ranking).</div>
Compared with existing datasets, the T<sup>2</sup>Ranking dataset has the following characteristics and advantages:
* The proposed dataset focuses on the Chinese search scenario and has advantages in data scale compared with existing Chinese passage ranking datasets, which can better support the design of deep learning algorithms.
* The proposed dataset has a large number of fine-grained relevance annotations, which is helpful for mining fine-grained relationships between queries and passages and constructing more accurate ranking algorithms.
* By retrieving passage results from multiple commercial search engines and providing complete annotation, we ease the false negative problem to some extent, which is beneficial for a more accurate evaluation.
* We design multiple strategies to ensure the high quality of our dataset, such as using a passage segmentation model and a passage clustering model to enhance the semantic integrity and diversity of passages, and employing an active-learning-based annotation method to improve the efficiency and quality of data annotation.
## Data Download
The whole dataset is placed in [huggingface](https://huggingface.co/datasets/THUIR/T2Ranking), and the data formats are presented in the following table.
<div class="center">
| Description| Filename|Num Records|Format|
|-------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|----------:|-----------------------------------:|
| Collection | collection.tsv | 2,303,643 | tsv: pid, passage |
| Queries Train | queries.train.tsv | 258,042 | tsv: qid, query |
| Queries Dev | queries.dev.tsv | 24,832 | tsv: qid, query |
| Queries Test | queries.test.tsv | 24,832 | tsv: qid, query |
| Qrels Train for re-ranking | qrels.train.tsv | 1,613,421 | TREC qrels format |
| Qrels Dev for re-ranking | qrels.dev.tsv | 400,536 | TREC qrels format |
| Qrels Retrieval Train | qrels.retrieval.train.tsv | 744,663 | tsv: qid, pid |
| Qrels Retrieval Dev | qrels.retrieval.dev.tsv | 118,933 | tsv: qid, pid |
| BM25 Negatives | train.bm25.tsv | 200,359,731 | tsv: qid, pid, index |
| Hard Negatives | train.mined.tsv | 200,376,001 | tsv: qid, pid, index, score |
</div>
You can download the dataset by running the following command:
```bash
git lfs install
git clone https://huggingface.co/datasets/THUIR/T2Ranking
```
After downloading, you can find the following files in the folder:
```
├── data
│ ├── collection.tsv
│ ├── qrels.dev.tsv
│ ├── qrels.retrieval.dev.tsv
│ ├── qrels.retrieval.train.tsv
│ ├── qrels.train.tsv
│ ├── queries.dev.tsv
│ ├── queries.test.tsv
│ ├── queries.train.tsv
│ ├── train.bm25.tsv
│ └── train.mined.tsv
├── script
│ ├── train_cross_encoder.sh
│ └── train_dual_encoder.sh
└── src
├── convert2trec.py
├── dataset_factory.py
├── modeling.py
├── msmarco_eval.py
├── train_cross_encoder.py
├── train_dual_encoder.py
└── utils.py
```
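The TSV files can be read directly, for example with pandas. A minimal sketch, assuming the files have no header row and using the column names from the table above:
```python
import csv
import pandas as pd
collection = pd.read_csv("data/collection.tsv", sep="\t", names=["pid", "passage"], quoting=csv.QUOTE_NONE)
queries = pd.read_csv("data/queries.train.tsv", sep="\t", names=["qid", "query"], quoting=csv.QUOTE_NONE)
qrels = pd.read_csv("data/qrels.retrieval.train.tsv", sep="\t", names=["qid", "pid"])
print(collection.head())
```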
## Training and Evaluation
The dual-encoder can be trained by running the following command:
```bash
sh script/train_dual_encoder.sh
```
After training the model, you can evaluate the model by running the following command:
```bash
python src/msmarco_eval.py data/qrels.retrieval.dev.tsv output/res.top1000.step20
```
The cross-encoder can be trained by running the following command:
```bash
sh script/train_cross_encoder.sh
```
After training the model, you can evaluate the model by running the following command:
```bash
python src/convert2trec.py output/res.step-20 && python src/msmarco_eval.py data/qrels.retrieval.dev.tsv output/res.step-20.trec && path_to/trec_eval -m ndcg_cut.5 data/qrels.dev.tsv res.step-20.trec
```
BM25 on DEV set
```bash
#####################
MRR @10: 0.35894801237316354
QueriesRanked: 24831
recall@1: 0.05098711868967141
recall@1000: 0.7464097131133757
recall@50: 0.4942572226146033
#####################
```
DPR w/o hard negatives on DEV set
```bash
#####################
MRR @10: 0.4856112079562753
QueriesRanked: 24831
recall@1: 0.07367235058688999
recall@1000: 0.9082753169878586
recall@50: 0.7099350889583964
#####################
```
DPR w/ hard negatives on DEV set
```bash
#####################
MRR @10: 0.5166915171959451
QueriesRanked: 24831
recall@1: 0.08047455688965123
recall@1000: 0.9135220125786163
recall@50: 0.7327044025157232
#####################
```
BM25 retrieved+CE reranked on DEV set
```bash
#####################
MRR @10: 0.5188107959009376
QueriesRanked: 24831
recall@1: 0.08545219116806242
recall@1000: 0.7464097131133757
recall@50: 0.595298153566744
#####################
ndcg_cut_20 all 0.4405
ndcg_cut_100 all 0.4705
#####################
```
DPR retrieved+CE reranked on DEV set
```bash
#####################
MRR @10: 0.5508822816845231
QueriesRanked: 24831
recall@1: 0.08903406988867588
recall@1000: 0.9135220125786163
recall@50: 0.7393720781623112
#####################
ndcg_cut_20 all 0.5131
ndcg_cut_100 all 0.5564
#####################
```
## License
The dataset is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0.html).
## Citation
If you use this dataset in your research, please cite our paper:
```
@misc{xie2023t2ranking,
title={T2Ranking: A large-scale Chinese Benchmark for Passage Ranking},
author={Xiaohui Xie and Qian Dong and Bingning Wang and Feiyang Lv and Ting Yao and Weinan Gan and Zhijing Wu and Xiangsheng Li and Haitao Li and Yiqun Liu and Jin Ma},
year={2023},
eprint={2304.03679},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
|
ybelkada/english_quotes_copy | 2023-04-04T06:13:26.000Z | [
"region:us"
] | ybelkada | null | null | null | 0 | 367 | ---
dataset_info:
features:
- name: quote
dtype: string
- name: author
dtype: string
- name: tags
sequence: string
splits:
- name: train
num_bytes: 598359
num_examples: 2508
download_size: 349107
dataset_size: 598359
---
# Dataset Card for "english_quotes_copy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mstz/dexter | 2023-04-20T10:23:41.000Z | [
"task_categories:tabular-classification",
"language:en",
"dexter",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_dexter_168,
author = {Guyon,Isabelle, Gunn,Steve, Ben-Hur,Asa & Dror,Gideon},
title = {{Dexter}},
year = {2008},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5P898}}
} | null | 0 | 367 | ---
language:
- en
tags:
- dexter
- tabular_classification
- binary_classification
- UCI
pretty_name: Dexter
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- dexter
---
# Dexter
The [Dexter dataset](https://archive-beta.ics.uci.edu/dataset/168/dexter) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** |
|-----------------------|---------------------------|
| dexter | Binary classification.|
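A minimal sketch of loading the configuration with the `datasets` library (assuming the configuration name from the table above):
```python
from datasets import load_dataset
# Load the single "dexter" configuration for binary classification.
dexter = load_dataset("mstz/dexter", "dexter")
print(dexter)
```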
|
result-kand2-sdxl-wuerst-karlo/812b079e | 2023-09-27T02:37:32.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 367 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 164
num_examples: 10
download_size: 1319
dataset_size: 164
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "812b079e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
voidful/NMSQA | 2023-04-04T04:46:23.000Z | [
"task_categories:question-answering",
"task_categories:automatic-speech-recognition",
"task_ids:abstractive-qa",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"speech-recognition",
"arxiv:2203.04911",
"region:us"
] | voidful | null | null | null | 7 | 366 | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- expert-generated
- machine-generated
- crowdsourced
language:
- en
license: []
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- question-answering
- automatic-speech-recognition
task_ids:
- abstractive-qa
pretty_name: NMSQA
tags:
- speech-recognition
---
# Dataset Card for NMSQA (Natural Multi-speaker Spoken Question Answering)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- Homepage:
https://github.com/DanielLin94144/DUAL-textless-SQA
- Repository:
https://github.com/DanielLin94144/DUAL-textless-SQA
- Paper:
https://arxiv.org/abs/2203.04911
- Leaderboard:
- Point of Contact:
Download audio data: [https://huggingface.co/datasets/voidful/NMSQA/resolve/main/nmsqa_audio.tar.gz](https://huggingface.co/datasets/voidful/NMSQA/resolve/main/nmsqa_audio.tar.gz)
Unzip audio data: `tar -xf nmsqa_audio.tar.gz`
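A minimal sketch of loading the annotations with the `datasets` library (assuming the repository loads directly with `load_dataset`; field names follow the Data Fields section below, and the audio itself comes from the archive downloaded above):
```python
from datasets import load_dataset
nmsqa = load_dataset("voidful/NMSQA")
sample = nmsqa["train"][0]
print(sample["question_normalized_text"], sample["question_audio_path"])
```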
### Dataset Summary
The Natural Multi-speaker Spoken Question Answering (NMSQA) dataset is designed for the task of textless spoken question answering. It is based on the SQuAD dataset and contains spoken questions and passages. The dataset includes the original text, transcriptions, and audio files of the spoken content. This dataset is created to evaluate the performance of models on textless spoken question answering tasks.
### Supported Tasks and Leaderboards
The primary task supported by this dataset is textless spoken question answering, where the goal is to answer questions based on spoken passages without relying on textual information. The dataset can also be used for automatic speech recognition tasks.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
Each instance in the dataset contains the following fields:
- id: Unique identifier for the instance
- title: The title of the passage
- context: The passage text
- question: The question text
- answers: The answer annotations, with the following sub-fields:
  - answer_start: The start index of the answer in the text
  - audio_full_answer_end: The end position of the audio answer in seconds
  - audio_full_answer_start: The start position of the audio answer in seconds
  - audio_full_neg_answer_end: The end position of the audio answer in seconds for an incorrect answer with the same words
  - audio_full_neg_answer_start: The start position of the audio answer in seconds for an incorrect answer with the same words
  - audio_segment_answer_end: The end position of the audio answer in seconds for the segment
  - audio_segment_answer_start: The start position of the audio answer in seconds for the segment
  - text: The answer text
- content_segment_audio_path: The audio path for the content segment
- content_full_audio_path: The complete audio path for the content
- content_audio_sampling_rate: The audio sampling rate
- content_audio_speaker: The audio speaker
- content_segment_text: The segment text of the content
- content_segment_normalized_text: The normalized text for generating audio
- question_audio_path: The audio path for the question
- question_audio_sampling_rate: The audio sampling rate
- question_audio_speaker: The audio speaker
- question_normalized_text: The normalized text for generating audio
### Data Fields
The dataset includes the following data fields:
- id
- title
- context
- question
- answers
- content_segment_audio_path
- content_full_audio_path
- content_audio_sampling_rate
- content_audio_speaker
- content_segment_text
- content_segment_normalized_text
- question_audio_path
- question_audio_sampling_rate
- question_audio_speaker
- question_normalized_text
### Data Splits
The dataset is split into train, dev, and test sets.
## Dataset Creation
### Curation Rationale
The NMSQA dataset is created to address the challenge of textless spoken question answering, where the model must answer questions based on spoken passages without relying on textual information.
### Source Data
The NMSQA dataset is based on the SQuAD dataset, with spoken questions and passages created from the original text data.
#### Initial Data Collection and Normalization
The initial data collection involved converting the original SQuAD dataset's text-based questions and passages into spoken audio files. The text was first normalized, and then audio files were generated using text-to-speech methods.
#### Who are the source language producers?
The source language producers are the creators of the SQuAD dataset and the researchers who generated the spoken audio files for the NMSQA dataset.
### Annotations
#### Annotation process
The annotations for the NMSQA dataset are derived from the original SQuAD dataset. Additional annotations, such as audio start and end positions for correct and incorrect answers, as well as audio file paths and speaker information, are added by the dataset creators.
#### Who are the annotators?
The annotators for the NMSQA dataset are the creators of the SQuAD dataset and the researchers who generated the spoken audio files and additional annotations for the NMSQA dataset.
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
The NMSQA dataset contributes to the development and evaluation of models for textless spoken question answering tasks, which can lead to advancements in natural language processing and automatic speech recognition. Applications of these technologies can improve accessibility and convenience in various domains, such as virtual assistants, customer service, and voice-controlled devices.
### Discussion of Biases
The dataset inherits potential biases from the original SQuAD dataset, which may include biases in the selection of passages, questions, and answers. Additionally, biases may be introduced in the text-to-speech process and the choice of speakers used to generate the spoken audio files.
### Other Known Limitations
As the dataset is based on the SQuAD dataset, it shares the same limitations, including the fact that it is limited to the English language and mainly focuses on factual questions. Furthermore, the dataset may not cover a wide range of accents, dialects, or speaking styles.
## Additional Information
### Dataset Curators
The NMSQA dataset is curated by Guan-Ting Lin, Yung-Sung Chuang, Ho-Lam Chung, Shu-Wen Yang, Hsuan-Jui Chen, Shang-Wen Li, Abdelrahman Mohamed, Hung-Yi Lee, and Lin-Shan Lee.
### Licensing Information
The licensing information for the dataset is not explicitly mentioned.
### Citation Information
```
@article{lin2022dual,
title={DUAL: Textless Spoken Question Answering with Speech Discrete Unit Adaptive Learning},
author={Lin, Guan-Ting and Chuang, Yung-Sung and Chung, Ho-Lam and Yang, Shu-wen and Chen, Hsuan-Jui and Li, Shang-Wen and Mohamed, Abdelrahman and Lee, Hung-yi and Lee, Lin-shan},
journal={arXiv preprint arXiv:2203.04911},
year={2022}
}
```
### Contributions
Thanks to [@voidful](https://github.com/voidful) for adding this dataset. |
cyberagent/crello | 2023-09-14T08:33:47.000Z | [
"task_categories:unconditional-image-generation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cdla-permissive-2.0",
"graphic design",
"design templates",
"arxiv:2108.01249",
"region:us"
] | cyberagent | null | null | null | 13 | 366 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: cdla-permissive-2.0
multilinguality:
- monolingual
pretty_name: crello
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- graphic design
- design templates
task_categories:
- unconditional-image-generation
task_ids: []
dataset_info:
features:
- name: id
dtype: string
- name: length
dtype: int64
- name: group
dtype:
class_label:
names:
'0': BG
'1': EO
'2': HC
'3': MM
'4': SM
'5': SMA
- name: format
dtype:
class_label:
names:
'0': Album Cover
'1': Book Cover
'2': Brochure
'3': Business card
'4': Calendar
'5': Card
'6': Certificate
'7': Coupon
'8': Email header
'9': FB event cover
'10': Facebook
'11': Facebook AD
'12': Facebook cover
'13': Flayer
'14': Gallery Image
'15': Gift Certificate
'16': Graphic
'17': IGTV Cover
'18': Image
'19': Infographic
'20': Instagram
'21': Instagram AD
'22': Instagram Highlight Cover
'23': Instagram Story
'24': Invitation
'25': Invoice
'26': Label
'27': Large Rectangle
'28': Leaderboard
'29': Letterhead
'30': LinkedIn Cover
'31': Logo
'32': Medium Rectangle
'33': Menu
'34': Mind Map
'35': Mobile Presentation
'36': Mood Board
'37': Newsletter
'38': Photo Book
'39': Pinterest
'40': Postcard
'41': Poster
'42': Poster US
'43': Presentation
'44': Presentation Wide
'45': Proposal
'46': Recipe Card
'47': Resume
'48': Schedule Planner
'49': Skyscraper
'50': Snapchat Geofilter
'51': Snapchat Moment Filter
'52': Storyboard
'53': T-Shirt
'54': Ticket
'55': Title
'56': Tumblr
'57': Twitch Offline Banner
'58': Twitch Profile Banner
'59': Twitter
'60': VK Community Cover
'61': VK Post with Button
'62': VK Universal Post
'63': Web Banner
'64': Youtube
'65': Youtube Thumbnail
'66': Zoom Background
- name: canvas_width
dtype:
class_label:
names:
'0': '1000'
'1': '1008'
'2': '1024'
'3': '1080'
'4': '1128'
'5': '1190'
'6': '1200'
'7': '1280'
'8': '1296'
'9': '1500'
'10': '1590'
'11': '160'
'12': '1600'
'13': '1920'
'14': '240'
'15': '241'
'16': '2560'
'17': '300'
'18': '3000'
'19': '336'
'20': '360'
'21': '396'
'22': '419'
'23': '420'
'24': '432'
'25': '500'
'26': '537'
'27': '540'
'28': '560'
'29': '576'
'30': '595'
'31': '600'
'32': '635'
'33': '728'
'34': '735'
'35': '792'
'36': '800'
'37': '841'
'38': '842'
'39': '851'
'40': '940'
- name: canvas_height
dtype:
class_label:
names:
'0': '1055'
'1': '1080'
'2': '1102'
'3': '1200'
'4': '1296'
'5': '141'
'6': '142'
'7': '1440'
'8': '1600'
'9': '1683'
'10': '1728'
'11': '191'
'12': '1920'
'13': '200'
'14': '2000'
'15': '216'
'16': '2340'
'17': '240'
'18': '250'
'19': '2560'
'20': '280'
'21': '288'
'22': '297'
'23': '298'
'24': '315'
'25': '320'
'26': '380'
'27': '400'
'28': '480'
'29': '500'
'30': '504'
'31': '512'
'32': '576'
'33': '595'
'34': '600'
'35': '612'
'36': '628'
'37': '654'
'38': '700'
'39': '720'
'40': '768'
'41': '788'
'42': '810'
'43': '841'
'44': '842'
'45': '90'
- name: category
dtype:
class_label:
names:
'0': all
'1': beauty
'2': businessFinance
'3': citiesPlaces
'4': educationScience
'5': fashionStyle
'6': foodDrinks
'7': handcraftArt
'8': holidaysCelebration
'9': homeStuff
'10': industry
'11': kidsParents
'12': leisureEntertainment
'13': medical
'14': natureWildlife
'15': pets
'16': realEstateBuilding
'17': religions
'18': socialActivityCharity
'19': sportExtreme
'20': technology
'21': transportation
'22': travelsVacations
- name: title
dtype: string
- name: type
sequence:
class_label:
names:
'0': coloredBackground
'1': imageElement
'2': maskElement
'3': svgElement
'4': textElement
- name: left
sequence: float32
- name: top
sequence: float32
- name: width
sequence: float32
- name: height
sequence: float32
- name: opacity
sequence: float32
- name: text
sequence: string
- name: font
sequence:
class_label:
names:
'0': ''
'1': Abril Fatface
'2': Aldrich
'3': Alef
'4': Alegreya Sans
'5': Alfa Slab One
'6': Alice
'7': Allerta Stencil
'8': Allura
'9': Amatic Sc
'10': Anton
'11': Arapey
'12': Architects Daughter
'13': Arima Madurai
'14': Arimo
'15': Arizonia
'16': Arkana Script
'17': Armata
'18': Assistant
'19': Bad Script
'20': Baloo Tamma
'21': Bangers
'22': Barrio
'23': Beacon
'24': Bebas Neue
'25': Bellefair
'26': Bentham
'27': Berkshire Swash
'28': Bilbo
'29': Black Ops One
'30': Blogger
'31': Breathe
'32': Breathe Press
'33': Brusher
'34': Brusher Free Font
'35': Bubbler One
'36': Buda
'37': Bungee
'38': Bungee Shade
'39': Cabin Sketch
'40': Caesar Dressing
'41': Cantarell
'42': Carter One
'43': Caveat
'44': Cedarville Cursive
'45': Chathura
'46': Clicker Script
'47': Comfortaa
'48': Contrail One
'49': Cookie
'50': Copse
'51': Cormorant Infant
'52': Courgette
'53': Cousine
'54': Covered By Your Grace
'55': Crete Round
'56': Cutive Mono
'57': Damion
'58': Dancing Script
'59': David Libre
'60': Dawning Of A New Day
'61': Delius
'62': Delius Swash Caps
'63': Didact Gothic
'64': Dorsa
'65': Dosis
'66': Droid Serif
'67': Dukomdesign Constantine
'68': Eb Garamond
'69': Economica
'70': El Messiri
'71': Elsie
'72': Elsie Swash Caps
'73': Euphoria Script
'74': Ewert
'75': Exo 2
'76': Farsan
'77': Faster One
'78': Fauna One
'79': Finger Paint
'80': Fjalla One
'81': Forum
'82': Frank Ruhl Libre
'83': Fredericka The Great
'84': Gabriela
'85': Gaegu
'86': Geo
'87': Gfs Didot
'88': Give You Glory
'89': Glass Antiqua
'90': Gluk Glametrix
'91': Gluk Znikomitno25
'92': Graduate
'93': Grand Hotel
'94': Gravitas One
'95': Great Vibes
'96': Gruppo
'97': Handlee
'98': Happy Monkey
'99': Heebo
'100': Homemade Apple
'101': Iceberg
'102': Iceland
'103': Im Fell
'104': Im Fell Dw Pica Sc
'105': Inconsolata
'106': Italiana
'107': Italianno
'108': Jacques Francois Shadow
'109': Josefin Sans
'110': Josefin Slab
'111': Julius Sans One
'112': Junge
'113': Jura
'114': Just Me Again Down Here
'115': Kalam
'116': Katibeh
'117': Kaushan Script
'118': Kavivanar
'119': Kelly Slab
'120': Knewave
'121': Knewave Outline
'122': Kreon
'123': Kristi
'124': Kumar One
'125': Kumar One Outline
'126': Kurale
'127': La Belle Aurore
'128': Lalezar
'129': Lato
'130': Lauren
'131': League Script
'132': Lemon Tuesday
'133': Libre Baskerville
'134': Limelight
'135': Londrina Shadow
'136': Londrina Sketch
'137': Loved By The King
'138': Lovers Quarrel
'139': Marcellus Sc
'140': Marck Script
'141': Mate
'142': Maven Pro
'143': Meddon
'144': Medula One
'145': Merienda One
'146': Merriweather
'147': Mikodacs
'148': Miriam Libre
'149': Monda
'150': Monofett
'151': Monsieur La Doulaise
'152': Montserrat
'153': Montserrat Alternates
'154': Mr Dafoe
'155': Mr De Haviland
'156': Mrs Saint Delafield
'157': Mrs Sheppards
'158': Neucha
'159': Nixie One
'160': Nothing You Could Do
'161': Noticia Text
'162': Nova Square
'163': Nunito
'164': Offside
'165': Okolaks
'166': Old Standard Tt
'167': Oleo Script
'168': Open Sans
'169': Open Sans Condensed
'170': Oranienbaum
'171': Orbitron
'172': Oswald
'173': Overlock
'174': Oxygen
'175': Pacifico
'176': Pangolin
'177': Parisienne
'178': Pathway Gothic One
'179': Patrick Hand
'180': Pattaya
'181': Patua One
'182': Permanent Marker
'183': Petit Formal Script
'184': Philosopher
'185': Pinyon Script
'186': Pirou
'187': Play
'188': Playball
'189': Playfair Display
'190': Playlist Caps
'191': Playlist Script
'192': Podkova
'193': Poiret One
'194': Pompiere
'195': Port Lligat Slab
'196': Press Start 2P
'197': Prompt
'198': Pt Sans
'199': Quattrocento
'200': Quicksand
'201': Racing Sans One
'202': Radley
'203': Rakkas
'204': Raleway
'205': Raleway Dots
'206': Rammetto One
'207': Rationale
'208': Reem Kufi
'209': Reenie Beanie
'210': Righteous
'211': Rise
'212': Rissa Typeface
'213': Roboto
'214': Rochester
'215': Rock Salt
'216': Rokkitt
'217': Rosario
'218': Rubik
'219': Rubik One
'220': Ruslan Display
'221': Russo One
'222': Rye
'223': Sacramento
'224': Sansita One
'225': Satisfy
'226': Scope One
'227': Secular One
'228': Selima Script
'229': Sensei
'230': Seymour One
'231': Shadows Into Light Two
'232': Share Tech Mono
'233': Sirin Stencil
'234': Six Caps
'235': Source Serif Pro
'236': Space Mono
'237': Stalemate
'238': Stint Ultra Expanded
'239': Sue Ellen Francisco
'240': Suez One
'241': Sunday
'242': Superclarendon Regular
'243': Text Me One
'244': Tinos
'245': Titillium Web
'246': Tulpen One
'247': Underdog
'248': V T323
'249': Vampiro One
'250': Varela Round
'251': Vast Shadow
'252': Vollkorn
'253': Waiting For The Sunrise
'254': Wire One
'255': Yanone Kaffeesatz
'256': Yellowtail
'257': Yeseva One
'258': Yesteryear
'259': Zeyada
'260': Znikomit
'261': Znikomitno24
- name: font_size
sequence: float32
- name: text_align
sequence:
class_label:
names:
'0': ''
'1': center
'2': left
'3': right
- name: angle
sequence: float32
- name: capitalize
sequence:
class_label:
names:
'0': 'false'
'1': 'true'
- name: line_height
sequence: float32
- name: letter_spacing
sequence: float32
- name: suitability
sequence:
class_label:
names:
'0': mobile
- name: keywords
sequence: string
- name: industries
sequence:
class_label:
names:
'0': artCrafts
'1': beautyCosmetics
'2': businessFinance
'3': corporate
'4': ecologyNature
'5': educationTraining
'6': entertainmentLeisure
'7': familyKids
'8': fashionStyle
'9': foodBeverages
'10': healthWellness
'11': homeLiving
'12': hrRecruitment
'13': marketingAds
'14': nonProfitCharity
'15': petsAnimals
'16': realEstateConstruction
'17': religionFaith
'18': retail
'19': services
'20': sportFitness
'21': techGadgets
'22': transportDelivery
'23': travelTourism
- name: color
sequence:
sequence: float32
length: 3
- name: image
sequence: image
splits:
- name: train
num_bytes: 3322744283.141
num_examples: 18659
- name: test
num_bytes: 421990602.771
num_examples: 2371
- name: validation
num_bytes: 425905823.995
num_examples: 2391
download_size: 4130251706
dataset_size: 4170640709.9069996
---
# Dataset Card for Crello
## Table of Contents
- [Dataset Card for Crello](#dataset-card-for-crello)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CanvasVAE github](https://github.com/CyberAgentAILab/canvas-vae)
- **Repository:**
- **Paper:** [CanvasVAE: Learning to Generate Vector Graphic Documents](https://arxiv.org/abs/2108.01249)
- **Leaderboard:**
- **Point of Contact:** [Kota Yamaguchi](https://github.com/kyamagu)
### Dataset Summary
The Crello dataset is compiled for the study of vector graphic documents. The dataset contains document meta-data such as canvas size and pre-rendered elements such as images or text boxes. The original templates were collected from [crello.com](https://crello.com) (now [create.vista.com](https://create.vista.com/)) and converted to a low-resolution format suitable for machine learning analysis.
### Supported Tasks and Leaderboards
[CanvasVAE](https://arxiv.org/abs/2108.01249) studies unsupervised document generation.
### Languages
Almost all design templates use English.
## Dataset Structure
### Data Instances
Each instance has scalar attributes (canvas) and sequence attributes (elements). Categorical values are stored as integer values. Check `ClassLabel` features of the dataset for the list of categorical labels.
```
{'id': '592d6c2c95a7a863ddcda140',
'length': 8,
'group': 4,
'format': 20,
'canvas_width': 3,
'canvas_height': 1,
'category': 0,
'title': 'Beauty Blog Ad Woman with Unusual Hairstyle',
'type': [1, 3, 3, 3, 3, 4, 4, 4],
'left': [0.0,
-0.0009259259095415473,
0.24444444477558136,
0.5712962746620178,
0.2657407522201538,
0.369228333234787,
0.2739444375038147,
0.44776931405067444],
'top': [0.0,
-0.0009259259095415473,
0.37037035822868347,
0.41296297311782837,
0.41296297311782837,
0.8946287035942078,
0.4549448788166046,
0.40591198205947876],
'width': [1.0,
1.0018517971038818,
0.510185182094574,
0.16296295821666718,
0.16296295821666718,
0.30000001192092896,
0.4990740716457367,
0.11388888955116272],
'height': [1.0,
1.0018517971038818,
0.25833332538604736,
0.004629629664123058,
0.004629629664123058,
0.016611294820904732,
0.12458471953868866,
0.02657807245850563],
'opacity': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
'text': ['', '', '', '', '', 'STAY WITH US', 'FOLLOW', 'PRESS'],
'font': [0, 0, 0, 0, 0, 152, 172, 152],
'font_size': [0.0, 0.0, 0.0, 0.0, 0.0, 18.0, 135.0, 30.0],
'text_align': [0, 0, 0, 0, 0, 2, 2, 2],
'angle': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
'capitalize': [0, 0, 0, 0, 0, 0, 0, 0],
'line_height': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
'letter_spacing': [0.0, 0.0, 0.0, 0.0, 0.0, 14.0, 12.55813980102539, 3.0],
'suitability': [0],
'keywords': ['beautiful',
'beauty',
'blog',
'blogging',
'caucasian',
'cute',
'elegance',
'elegant',
'fashion',
'fashionable',
'femininity',
'glamour',
'hairstyle',
'luxury',
'model',
'stylish',
'vogue',
'website',
'woman',
'post',
'instagram',
'ig',
'insta',
'fashion',
'purple'],
'industries': [1, 8, 13],
'color': [[153.0, 118.0, 96.0],
[34.0, 23.0, 61.0],
[34.0, 23.0, 61.0],
[255.0, 255.0, 255.0],
[255.0, 255.0, 255.0],
[255.0, 255.0, 255.0],
[255.0, 255.0, 255.0],
[255.0, 255.0, 255.0]],
'image': [<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=256x256>,
<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=256x256>,
<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=256x256>,
<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=256x256>,
<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=256x256>,
<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=256x256>,
<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=256x256>,
<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=256x256>]}
```
To get a label for categorical values, use the `int2str` method:
```python
key = "font"
example = dataset[0]
dataset.features[key].int2str(example[key])
```
### Data Fields
In the following, categorical fields are shown as `categorical` type, but the actual storage is `int64`.
**Canvas attributes**
| Field | Type | Shape | Description |
| ------------- | ----------- | ------- | --------------------------------------------------------------- |
| id | string | () | Template ID from crello.com |
| group | categorical | () | Broad design groups, such as social media posts or blog headers |
| format | categorical | () | Detailed design formats, such as Instagram post or postcard |
| category | categorical | () | Topic category of the design, such as holiday celebration |
| canvas_width | categorical | () | Canvas pixel width |
| canvas_height | categorical | () | Canvas pixel height |
| length | int64 | () | Length of elements |
| suitability | categorical | (None,) | List of display tags, only `mobile` tag exists |
| keywords      | string      | (None,) | List of keywords associated with this template                  |
| industries | categorical | (None,) | List of industry tags like `marketingAds` |
**Element attributes**
| Field | Type | Shape | Description |
| -------------- | ----------- | --------- | -------------------------------------------------------------------- |
| type | categorical | (None,) | Element type, such as vector shape, image, or text |
| left | float32 | (None,) | Element left position normalized to [0, 1] range w.r.t. canvas_width |
| top | float32 | (None,) | Element top position normalized to [0, 1] range w.r.t. canvas_height |
| width | float32 | (None,) | Element width normalized to [0, 1] range w.r.t. canvas_width |
| height | float32 | (None,) | Element height normalized to [0, 1] range w.r.t. canvas_height |
| color | int64 | (None, 3) | Extracted main RGB color of the element |
| opacity | float32 | (None,) | Opacity in [0, 1] range |
| image | image | (None,) | Pre-rendered 256x256 preview of the element encoded in PNG format |
| text | string | (None,) | Text content in UTF-8 encoding for text element |
| font | categorical | (None,) | Font family name for text element |
| font_size | float32 | (None,) | Font size (height) in pixels |
| text_align | categorical | (None,) | Horizontal text alignment, left, center, right for text element |
| angle | float32 | (None,) | Element rotation angle (radian) w.r.t. the center of the element |
| capitalize | categorical | (None,) | Binary flag to capitalize letters |
| line_height | float32 | (None,) | Scaling parameter to line height, default is 1.0 |
| letter_spacing | float32 | (None,) | Adjustment parameter for letter spacing, default is 0.0 |
Note that the color and pre-rendered images do not necessarily accurately reproduce the original design templates. The original template is accessible at the following URL if still available.
```
https://create.vista.com/artboard/?template=<template_id>
```
`left` and `top` can be negative because elements can be bigger than the canvas size.
### Data Splits
The Crello dataset has 3 splits: train, validation, and test. The current split is generated such that the same title of the original template shows up in only one split.
| Split | Count |
| --------- | ----- |
| train | 18659 |
| validation | 2391 |
| test | 2371 |
### Visualization
Each example can be visualized using the following approach with [`skia-python`](https://kyamagu.github.io/skia-python/). Note that the following does not guarantee a similar appearance to the original template. Currently, the quality of text rendering is far from perfect.
```python
import io
from typing import Any, Dict
import datasets
import numpy as np
import skia
def render(features: datasets.Features, example: Dict[str, Any], max_size: float=512.) -> bytes:
"""Render parsed sequence example onto an image and return as PNG bytes."""
canvas_width = int(features["canvas_width"].int2str(example["canvas_width"]))
canvas_height = int(features["canvas_height"].int2str(example["canvas_height"]))
scale = min(1.0, max_size / canvas_width, max_size / canvas_height)
surface = skia.Surface(int(scale * canvas_width), int(scale * canvas_height))
with surface as canvas:
canvas.scale(scale, scale)
for index in range(example["length"]):
pil_image = example["image"][index]
image = skia.Image.frombytes(
pil_image.convert('RGBA').tobytes(),
pil_image.size,
skia.kRGBA_8888_ColorType)
left = example["left"][index] * canvas_width
top = example["top"][index] * canvas_height
width = example["width"][index] * canvas_width
height = example["height"][index] * canvas_height
rect = skia.Rect.MakeXYWH(left, top, width, height)
paint = skia.Paint(Alphaf=example["opacity"][index], AntiAlias=True)
angle = example["angle"][index]
with skia.AutoCanvasRestore(canvas):
if angle != 0:
degree = 180. * angle / np.pi
canvas.rotate(degree, left + width / 2., top + height / 2.)
canvas.drawImageRect(image, rect, paint=paint)
image = surface.makeImageSnapshot()
with io.BytesIO() as f:
image.save(f, skia.kPNG)
return f.getvalue()
```
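As a quick usage sketch of the `render` helper above (the repository name `cyberagent/crello` is an assumption; substitute the actual dataset identifier if it differs):
```python
import datasets

# Assumed repository name; replace with the actual dataset identifier if needed.
ds = datasets.load_dataset("cyberagent/crello", split="validation")
png_bytes = render(ds.features, ds[0], max_size=512.)
with open("preview.png", "wb") as f:
    f.write(png_bytes)
```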
## Dataset Creation
### Curation Rationale
The Crello dataset is compiled for the general study of vector graphic documents, with the goal of producing a dataset that offers complete vector graphic information suitable for neural methodologies.
### Source Data
#### Initial Data Collection and Normalization
The dataset is initially scraped from the former `crello.com` and pre-processed to the above format.
#### Who are the source language producers?
While [create.vista.com](https://create.vista.com/) owns those templates, the templates seem to be originally created by a specific group of design studios.
### Personal and Sensitive Information
The dataset does not contain any personal information about the creator but may contain a picture of people in the design template.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was developed for advancing the general study of vector graphic documents, especially for generative systems of graphic design. Successful utilization might enable the automation of creative workflows that currently involve human designers.
### Discussion of Biases
The templates contained in the dataset reflect the biases appearing in the source data, which could present gender biases in specific design categories.
### Other Known Limitations
Due to the unknown data specification of the source data, the color and pre-rendered images do not necessarily accurately reproduce the original design templates. The original template is accessible at the following URL if still available.
https://create.vista.com/artboard/?template=<template_id>
## Additional Information
### Dataset Curators
The Crello dataset was developed by [Kota Yamaguchi](https://github.com/kyamagu).
### Licensing Information
The origin of the dataset is [create.vista.com](https://create.vista.com) (formerly `crello.com`).
The distributor ("We") does not own the copyrights of the original design templates.
By using the Crello dataset, the user of this dataset ("You") must agree to the
[VistaCreate License Agreements](https://create.vista.com/faq/legal/licensing/license_agreements/).
The dataset is distributed under [CDLA-Permissive-2.0 license](https://cdla.dev/permissive-2-0/).
**Note**
We do not re-distribute the original files, as the terms do not allow it.
### Citation Information
```
@article{yamaguchi2021canvasvae,
  title={CanvasVAE: Learning to Generate Vector Graphic Documents},
  author={Yamaguchi, Kota},
  journal={ICCV},
  year={2021}
}
```
### Releases
3.1: bugfix release (Feb 16, 2023)
- Fix a bug that ignores newline characters in some of the texts
3.0: v3 release (Feb 13, 2023)
- Migrate to Hugging Face Hub.
- Fix various text rendering bugs.
- Change split generation criteria for avoiding near-duplicates: no compatibility with v2 splits.
- Incorporate a motion picture thumbnail in templates.
- Add `title`, `keywords`, `suitability`, and `industries` canvas attributes.
- Add `capitalize`, `line_height`, and `letter_spacing` element attributes.
2.0: v2 release (May 26, 2022)
- Add `text`, `font`, `font_size`, `text_align`, and `angle` element attributes.
- Include rendered text element in `image_bytes`.
1.0: v1 release (Aug 24, 2021)
### Contributions
Thanks to [@kyamagu](https://github.com/kyamagu) for adding this dataset. |
result-kand2-sdxl-wuerst-karlo/043bd500 | 2023-09-27T02:37:30.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 366 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 164
num_examples: 10
download_size: 1319
dataset_size: 164
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "043bd500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kor_nlu | 2023-01-25T14:33:57.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"annotations_creators:found",
"language_creators:expert-generated",
"language_creators:found",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|snli",
"language:ko",
"license:cc-by-sa-4.0",
"arxiv:2004.03289",
"region:us"
] | null | The dataset contains data for benchmarking Korean models on NLI and STS | null | null | 1 | 365 | ---
annotations_creators:
- found
language_creators:
- expert-generated
- found
- machine-generated
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|snli
task_categories:
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-scoring
- text-scoring
pretty_name: KorNlu
dataset_info:
- config_name: nli
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 80135707
num_examples: 550146
- name: validation
num_bytes: 318170
num_examples: 1570
- name: test
num_bytes: 1047250
num_examples: 4954
download_size: 80030037
dataset_size: 81501127
- config_name: sts
features:
- name: genre
dtype:
class_label:
names:
'0': main-news
'1': main-captions
'2': main-forum
'3': main-forums
- name: filename
dtype:
class_label:
names:
'0': images
'1': MSRpar
'2': MSRvid
'3': headlines
'4': deft-forum
'5': deft-news
'6': track5.en-en
'7': answers-forums
'8': answer-answer
- name: year
dtype:
class_label:
names:
'0': '2017'
'1': '2016'
'2': '2013'
'3': 2012train
'4': '2014'
'5': '2015'
'6': 2012test
- name: id
dtype: int32
- name: score
dtype: float32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: train
num_bytes: 1056664
num_examples: 5703
- name: validation
num_bytes: 305009
num_examples: 1471
- name: test
num_bytes: 249671
num_examples: 1379
download_size: 1603824
dataset_size: 1611344
---
# Dataset Card for KorNLU
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/kakaobrain/KorNLUDatasets)
- **Repository:** [Github](https://github.com/kakaobrain/KorNLUDatasets)
- **Paper:** [Arxiv](https://arxiv.org/abs/2004.03289)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset. |
garcianacho/human_genome_csv | 2023-10-04T12:41:28.000Z | [
"task_categories:token-classification",
"license:apache-2.0",
"biology",
"genome",
"human genome",
"bioinformatics",
"region:us"
] | garcianacho | null | null | null | 0 | 364 | ---
license: apache-2.0
task_categories:
- token-classification
tags:
- biology
- genome
- human genome
- bioinformatics
---
## Human Genome Dataset
Here is the human genome, ready to be used to train LLMs.
|
nulltella/bbc-articles-finetuning-classif | 2023-09-28T18:19:59.000Z | [
"region:us"
] | nulltella | null | null | null | 0 | 364 | Entry not found |
result-kand2-sdxl-wuerst-karlo/cc93a78b | 2023-09-27T06:17:25.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 364 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 239
num_examples: 10
download_size: 1420
dataset_size: 239
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cc93a78b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/c9242d3b | 2023-09-27T04:27:02.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 363 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 220
num_examples: 10
download_size: 1395
dataset_size: 220
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "c9242d3b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BeIR/nfcorpus-qrels | 2022-10-23T06:05:32.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 0 | 362 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
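For example, a minimal sketch of loading the relevance judgments in this repository (split names are assumptions; check the available splits before indexing):
```python
from datasets import load_dataset

# Minimal sketch; the columns follow the query-id / corpus-id / score format described below.
qrels = load_dataset("BeIR/nfcorpus-qrels")
print(qrels)  # shows the available splits
first_split = next(iter(qrels))
print(qrels[first_split][0])  # e.g. {'query-id': ..., 'corpus-id': ..., 'score': ...}
```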
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1`
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
  - `query-id`: a `string` feature representing the query id
  - `corpus-id`: a `string` feature, denoting the document id.
  - `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
Francesco/construction-safety-gsnvb | 2023-03-30T09:11:51.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 2 | 362 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': construction-safety
'1': helmet
'2': no-helmet
'3': no-vest
'4': person
'5': vest
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: construction-safety-gsnvb
tags:
- rf100
---
# Dataset Card for construction-safety-gsnvb
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/construction-safety-gsnvb
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
construction-safety-gsnvb
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
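A minimal loading sketch (assuming the default configuration; the fields follow the list above):
```python
from datasets import load_dataset

# Minimal sketch; object annotations are parallel lists over id/area/bbox/category.
ds = load_dataset("Francesco/construction-safety-gsnvb", split="train")
sample = ds[0]
print(sample["width"], sample["height"])
for bbox, category in zip(sample["objects"]["bbox"], sample["objects"]["category"]):
    print(category, bbox)  # bbox is [x, y, width, height] in COCO format
```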
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/construction-safety-gsnvb
### Citation Information
```
@misc{ construction-safety-gsnvb,
title = { construction safety gsnvb Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/construction-safety-gsnvb } },
url = { https://universe.roboflow.com/object-detection/construction-safety-gsnvb },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
DFKI-SLT/scidtb_argmin | 2023-08-08T12:46:04.000Z | [
"region:us"
] | DFKI-SLT | null | @inproceedings{accuosto-saggion-2019-transferring,
title = "Transferring Knowledge from Discourse to Arguments: A Case Study with Scientific Abstracts",
author = "Accuosto, Pablo and
Saggion, Horacio",
booktitle = "Proceedings of the 6th Workshop on Argument Mining",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W19-4505",
doi = "10.18653/v1/W19-4505",
pages = "41--51",
abstract = "In this work we propose to leverage resources available with discourse-level annotations to facilitate the identification of argumentative components and relations in scientific texts, which has been recognized as a particularly challenging task. In particular, we implement and evaluate a transfer learning approach in which contextualized representations learned from discourse parsing tasks are used as input of argument mining models. As a pilot application, we explore the feasibility of using automatically identified argumentative components and relations to predict the acceptance of papers in computer science venues. In order to conduct our experiments, we propose an annotation scheme for argumentative units and relations and use it to enrich an existing corpus with an argumentation layer.",
} | null | 0 | 360 | Entry not found |
Blablablab/SOCKET | 2023-10-10T20:51:48.000Z | [
"license:cc-by-4.0",
"arxiv:2305.14938",
"region:us"
] | Blablablab | A unified evaluation benchmark dataset for evaluating the sociability of NLP models. | @misc{choi2023llms,
title={Do LLMs Understand Social Knowledge? Evaluating the Sociability of Large Language Models with SocKET Benchmark},
author={Minje Choi and Jiaxin Pei and Sagar Kumar and Chang Shu and David Jurgens},
year={2023},
eprint={2305.14938},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 2 | 359 | ---
license: cc-by-4.0
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/minjechoi/SOCKET
- **Paper:** Do LLMs Understand Social Knowledge? Evaluating the Sociability of Large Language Models with SocKET Benchmark [link](https://arxiv.org/abs/2305.14938)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This Dataset contains the tasks used in the paper "Do LLMs Understand Social Knowledge? Evaluating the Sociability of Large Language Models with SocKET Benchmark" [link](https://arxiv.org/abs/2305.14938).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
This benchmark is created by aggregating several existing NLP datasets that measure different aspects of social information.
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
@misc{choi2023llms,
title={Do LLMs Understand Social Knowledge? Evaluating the Sociability of Large Language Models with SocKET Benchmark},
author={Minje Choi and Jiaxin Pei and Sagar Kumar and Chang Shu and David Jurgens},
year={2023},
eprint={2305.14938},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
### Contributions
[More Information Needed] |
osunlp/MagicBrush | 2023-08-06T02:50:19.000Z | [
"task_categories:text-to-image",
"task_categories:image-to-image",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"arxiv:2306.10012",
"region:us"
] | osunlp | null | null | null | 29 | 359 | ---
license: cc-by-4.0
dataset_info:
features:
- name: img_id
dtype: string
- name: turn_index
dtype: int32
- name: source_img
dtype: image
- name: mask_img
dtype: image
- name: instruction
dtype: string
- name: target_img
dtype: image
splits:
- name: train
num_bytes: 25446150928.986
num_examples: 8807
- name: dev
num_bytes: 1521183444
num_examples: 528
download_size: 22358540292
dataset_size: 26967334372.986
task_categories:
- text-to-image
- image-to-image
language:
- en
pretty_name: MagicBrush
size_categories:
- 10K<n<100K
---
# Dataset Card for MagicBrush
## Dataset Description
- **Homepage:** https://osu-nlp-group.github.io/MagicBrush
- **Repository:** https://github.com/OSU-NLP-Group/MagicBrush
- **Point of Contact:** [Kai Zhang](mailto:zhang.13253@osu.edu)
### Dataset Summary
MagicBrush is the first large-scale, manually-annotated instruction-guided image editing dataset covering diverse scenarios: single-turn, multi-turn, mask-provided, and mask-free editing. MagicBrush comprises 10K (source image, instruction, target image) triples, which is sufficient to train large-scale image editing models.
Please check our [website](https://osu-nlp-group.github.io/MagicBrush/) to explore more visual results.
#### Dataset Structure
"img_id" (str): same from COCO id but in string type, for easier test set loading
"turn_index" (int32): the edit turn in the image
"source_img" (str): input image, could be the original real image (turn_index=1) and edited images from last turn (turn_index >=2)
"mask_img" (str): free-form mask image (white region), can be used in mask-provided setting to limit the region to be edited.
"instruction" (str): edit instruction of how the input image should be changed.
"target_img" (str): the edited image corresponding to the input image and instruction.
If you need auxiliary data, please use [training set](https://buckeyemailosu-my.sharepoint.com/:u:/g/personal/zhang_13253_buckeyemail_osu_edu/EYEqf_yG36lAgiXw2GvRl0QBDBOeZHxvNgxO0Ec9WDMcNg) and [dev set](https://buckeyemailosu-my.sharepoint.com/:u:/g/personal/zhang_13253_buckeyemail_osu_edu/EXkXvvC95C1JsgMNWGL_RcEBElmsGxXwAAAdGamN8PNhrg)
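A minimal loading sketch (assuming the image columns decode to PIL images as declared in the dataset metadata):
```python
from datasets import load_dataset

# Minimal sketch; splits are "train" and "dev" per the dataset metadata.
ds = load_dataset("osunlp/MagicBrush", split="dev")
example = ds[0]
print(example["img_id"], example["turn_index"], example["instruction"])
example["source_img"].save("source.png")
example["mask_img"].save("mask.png")
example["target_img"].save("target.png")
```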
### Splits
train: 8,807 edit turns (4,512 edit sessions).
dev: 528 edit turns (266 edit sessions).
test: (To prevent potential data leakage, please check our repo for information on obtaining the test set.)
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License.
## Citation Information
If you find this dataset useful, please consider citing our paper:
```
@misc{Zhang2023MagicBrush,
title={MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing},
author={Kai Zhang and Lingbo Mo and Wenhu Chen and Huan Sun and Yu Su},
year={2023},
eprint={2306.10012},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
starmpcc/Asclepius-Synthetic-Clinical-Notes | 2023-09-04T01:27:17.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-sa-4.0",
"medical",
"arxiv:2309.00237",
"region:us"
] | starmpcc | null | null | null | 11 | 359 | ---
license: cc-by-nc-sa-4.0
task_categories:
- question-answering
- summarization
- text-generation
- conversational
language:
- en
tags:
- medical
pretty_name: 'Asclepius: Synthetic Clincal Notes & Instruction Dataset'
size_categories:
- 100K<n<1M
---
# Asclepius: Synthetic Clincal Notes & Instruction Dataset
## Dataset Description
- **Repository:**
- [Github](https://github.com/starmpcc/Asclepius)
- **Paper:**
- https://arxiv.org/abs/2309.00237
- **MODEL:**
- https://huggingface.co/starmpcc/Asclepius-13B
- https://huggingface.co/starmpcc/Asclepius-7B
### Dataset Summary
This dataset is the official dataset for Asclepius [(arxiv)](https://arxiv.org/abs/2309.00237).
The dataset follows a Clinical Note - Question - Answer format for building clinical LLMs.
- We first synthesized clinical notes from [PMC-Patients](https://huggingface.co/datasets/zhengyun21/PMC-Patients) case reports with GPT-3.5
- Then, we generated instruction-answer pairs for the 157k synthetic discharge summaries
### Supported Tasks and Leaderboards
- This dataset covers the following 8 tasks:
- Named Entity Recognition
- Abbreviation Expansion
- Relation Extraction
- Temporal Information Extraction
- Coreference Resolution
- Paraphrasing
- Summarization
- Question Answering
### Languages
English
## Dataset Structure
### Data Instances
- `synthetic.csv`
- Clinical Note - Question - Answer pairs
### Data Fields
- `patient_id`: Unique case report id from PMC-Patients
- `patient`: Case report text
- `question`: GPT-3.5-generated instruction from the patient note. The prompt used can be found on our GitHub.
- `answer`: GPT-3.5 generated answer for given case report and question
- `task`: Corresponding category of the question. One of the tasks listed above.
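A minimal loading sketch (assuming the default configuration exposes the fields listed above):
```python
from datasets import load_dataset

# Minimal sketch; field names follow the Data Fields section above.
ds = load_dataset("starmpcc/Asclepius-Synthetic-Clinical-Notes", split="train")
sample = ds[0]
print(sample["task"])
print(sample["question"])
print(sample["answer"][:300])
```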
## Dataset Creation
### Source Data
[PMC-Patients](https://huggingface.co/datasets/zhengyun21/PMC-Patients)
### Annotations
We used GPT-3.5-turbo (version 0314).
You can check the prompts on our github.
## Additional Information
### Licensing Information
CC-BY-NC-SA 4.0
### Citation Information
@misc{kweon2023publicly,
title={Publicly Shareable Clinical Large Language Model Built on Synthetic Clinical Notes},
author={Sunjun Kweon and Junu Kim and Jiyoun Kim and Sujeong Im and Eunbyeol Cho and Seongsu Bae and Jungwoo Oh and Gyubok Lee and Jong Hak Moon and Seng Chan You and Seungjin Baek and Chang Hoon Han and Yoon Bin Jung and Yohan Jo and Edward Choi},
year={2023},
eprint={2309.00237},
archivePrefix={arXiv},
primaryClass={cs.CL}
} |
shariqfarooq/USYllmblue | 2023-09-27T00:10:28.000Z | [
"region:us"
] | shariqfarooq | null | null | null | 0 | 359 | ---
dataset_info:
features:
- name: gligen
dtype: image
- name: layoutgpt
dtype: image
- name: llmgrounded
dtype: image
- name: ours
dtype: image
- name: stablediffusion
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 96699641.0
num_examples: 44
download_size: 96703577
dataset_size: 96699641.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "USYllmblue"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TigerResearch/tigerbot-alpaca-en-50k | 2023-05-31T01:56:04.000Z | [
"language:en",
"license:apache-2.0",
"region:us"
] | TigerResearch | null | null | null | 0 | 356 | ---
license: apache-2.0
language:
- en
---
[Tigerbot](https://github.com/TigerResearch/TigerBot) in-house English question-answer pairs generated based on Alpaca.
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-alpaca-en-50k')
``` |
dangne/processed-wikipedia-20220301.simple | 2022-08-02T16:41:20.000Z | [
"region:us"
] | dangne | null | null | null | 0 | 355 | Entry not found |
amitness/maltese-news-classification | 2023-09-28T10:34:14.000Z | [
"language:mt",
"region:us"
] | amitness | null | null | null | 0 | 355 | ---
language: mt
dataset_info:
features:
- name: url
dtype: string
- name: title
dtype: string
- name: base_url
dtype: string
- name: text
dtype: string
- name: Court
dtype: int64
- name: Covid
dtype: int64
- name: Culture
dtype: int64
- name: EU
dtype: int64
- name: Economy
dtype: int64
- name: Education
dtype: int64
- name: Entertainment
dtype: int64
- name: Environment
dtype: int64
- name: Health
dtype: int64
- name: Immigration
dtype: int64
- name: International
dtype: int64
- name: Opinion
dtype: int64
- name: Politics
dtype: int64
- name: Religion
dtype: int64
- name: Social
dtype: int64
- name: Sports
dtype: int64
- name: Transport
dtype: int64
splits:
- name: train
num_bytes: 21007366
num_examples: 10783
- name: validation
num_bytes: 4716179
num_examples: 2296
- name: test
num_bytes: 4703075
num_examples: 2295
download_size: 16628687
dataset_size: 30426620
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "maltese-news-classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
robertmyers/targon | 2023-09-21T22:04:29.000Z | [
"region:us"
] | robertmyers | null | null | null | 0 | 355 | Entry not found |
numer_sense | 2022-11-18T21:34:07.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:slot-filling",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other",
"language:en",
"license:mit",
"arxiv:2005.00683",
"region:us"
] | null | NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145 masked-word-prediction probes.
We propose to study whether numerical commonsense knowledge can be induced from pre-trained language models like BERT, and to what extent this knowledge is robust against adversarial examples. We hope this will be beneficial for tasks such as knowledge base completion and open-domain question answering. | @inproceedings{lin2020numersense,
title={Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models},
author={Bill Yuchen Lin and Seyeon Lee and Rahul Khanna and Xiang Ren},
booktitle={Proceedings of EMNLP},
year={2020},
note={to appear}
} | null | 1 | 354 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other
task_categories:
- text-generation
- fill-mask
task_ids:
- slot-filling
paperswithcode_id: numersense
pretty_name: NumerSense
dataset_info:
features:
- name: sentence
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 825865
num_examples: 10444
- name: test_core
num_bytes: 62652
num_examples: 1132
- name: test_all
num_bytes: 184180
num_examples: 3146
download_size: 985463
dataset_size: 1072697
---
# Dataset Card for NumerSense
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://inklab.usc.edu/NumerSense/
- **Repository:** https://github.com/INK-USC/NumerSense
- **Paper:** https://arxiv.org/abs/2005.00683
- **Leaderboard:** https://inklab.usc.edu/NumerSense/#exp
- **Point of Contact:** Author emails listed in [paper](https://arxiv.org/abs/2005.00683)
### Dataset Summary
NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145
masked-word-prediction probes. The general idea is to mask numbers between 0-10 in sentences mined from a commonsense
corpus and evaluate whether a language model can correctly predict the masked value.
### Supported Tasks and Leaderboards
The dataset supports the task of slot-filling, specifically as an evaluation of numerical common sense. A leaderboard
is included on the [dataset webpage](https://inklab.usc.edu/NumerSense/#exp) with included benchmarks for GPT-2,
RoBERTa, BERT, and human performance. Leaderboards are included for both the core set and the adversarial set
discussed below.
### Languages
This dataset is in English.
## Dataset Structure
### Data Instances
Each instance consists of a sentence with a masked numerical value between 0-10 and (in the train set) a target.
Example from the training set:
```
sentence: Black bears are about <mask> metres tall.
target: two
```
### Data Fields
Each value of the training set consists of:
- `sentence`: The sentence with a number masked out with the `<mask>` token.
- `target`: The ground truth target value. Since the test sets do not include the ground truth, the `target` field
values are empty strings in the `test_core` and `test_all` splits.
### Data Splits
The dataset includes the following pre-defined data splits:
- A train set with >10K labeled examples (i.e. containing a ground truth value)
- A core test set (`test_core`) with 1,132 examples (no ground truth provided)
- An expanded test set (`test_all`) encompassing `test_core` with the addition of adversarial examples for a total of
3,146 examples. See section 2.2 of [the paper](https://arxiv.org/abs/2005.00683) for a discussion of how these examples are constructed.
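For reference, a minimal loading sketch with the 🤗 `datasets` library (split and field names are taken from this card):

```
from datasets import load_dataset

# Load NumerSense; split and field names follow the card above
dataset = load_dataset("numer_sense")

train = dataset["train"]
print(train[0])  # e.g. {'sentence': 'Black bears are about <mask> metres tall.', 'target': 'two'}

# The test splits ship without ground truth, so `target` is an empty string there
test_core = dataset["test_core"]
print(test_core[0]["sentence"], repr(test_core[0]["target"]))
```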
## Dataset Creation
### Curation Rationale
The purpose of this dataset is "to study whether PTLMs capture numerical commonsense knowledge, i.e., commonsense
knowledge that provides an understanding of the numeric relation between entities." This work is motivated by the
prior research exploring whether language models possess _commonsense knowledge_.
### Source Data
#### Initial Data Collection and Normalization
The dataset is an extension of the [Open Mind Common Sense](https://huggingface.co/datasets/open_mind_common_sense)
corpus. A query was performed to discover sentences containing numbers between 0-12, after which the resulting
sentences were manually evaluated for inaccuracies, typos, and the expression of commonsense knowledge. The numerical
values were then masked.
#### Who are the source language producers?
The [Open Mind Common Sense](https://huggingface.co/datasets/open_mind_common_sense) corpus, from which this dataset
is sourced, is a crowdsourced dataset maintained by the MIT Media Lab.
### Annotations
#### Annotation process
No annotations are present in this dataset beyond the `target` values automatically sourced from the masked
sentences, as discussed above.
#### Who are the annotators?
The curation and inspection were done in two rounds by graduate students.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
The motivation of measuring a model's ability to associate numerical values with real-world concepts appears
relatively innocuous. However, as discussed in the following section, the source dataset may well have biases encoded
from crowdworkers, particularly in terms of factoid coverage. A model's ability to perform well on this benchmark
should therefore not be considered evidence that it is more unbiased or objective than a human performing similar
tasks.
[More Information Needed]
### Discussion of Biases
This dataset is sourced from a crowdsourced commonsense knowledge base. While the information contained in the graph
is generally considered to be of high quality, the coverage is considered to be very low as a representation of all
possible commonsense knowledge. The representation of certain factoids may also be skewed by the demographics of the
crowdworkers. As one possible example, the term "homophobia" is connected with "Islam" in the ConceptNet knowledge
base, but not with any other religion or group, possibly due to the biases of crowdworkers contributing to the
project.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was collected by Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xiang Ren, Computer Science researchers
at the University of Southern California.
### Licensing Information
The data is hosted in a GitHub repository with the
[MIT License](https://github.com/INK-USC/NumerSense/blob/main/LICENSE).
### Citation Information
```
@inproceedings{lin2020numersense,
title={Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models},
author={Bill Yuchen Lin and Seyeon Lee and Rahul Khanna and Xiang Ren},
booktitle={Proceedings of EMNLP},
year={2020},
note={to appear}
}
```
### Contributions
Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset. |
selqa | 2023-01-25T14:43:46.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:1606.00851",
"region:us"
] | null | The SelQA dataset provides crowdsourced annotation for two selection-based question answer tasks,
answer sentence selection and answer triggering. | @InProceedings{7814688,
author={T. {Jurczyk} and M. {Zhai} and J. D. {Choi}},
booktitle={2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI)},
title={SelQA: A New Benchmark for Selection-Based Question Answering},
year={2016},
volume={},
number={},
pages={820-827},
doi={10.1109/ICTAI.2016.0128}
} | null | 0 | 354 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: selqa
pretty_name: SelQA
dataset_info:
- config_name: answer_selection_analysis
features:
- name: section
dtype: string
- name: question
dtype: string
- name: article
dtype: string
- name: is_paraphrase
dtype: bool
- name: topic
dtype:
class_label:
names:
'0': MUSIC
'1': TV
'2': TRAVEL
'3': ART
'4': SPORT
'5': COUNTRY
'6': MOVIES
'7': HISTORICAL EVENTS
'8': SCIENCE
'9': FOOD
- name: answers
sequence: int32
- name: candidates
sequence: string
- name: q_types
sequence:
class_label:
names:
'0': what
'1': why
'2': when
'3': who
'4': where
'5': how
'6': ''
splits:
- name: train
num_bytes: 9676758
num_examples: 5529
- name: test
num_bytes: 2798537
num_examples: 1590
- name: validation
num_bytes: 1378407
num_examples: 785
download_size: 14773444
dataset_size: 13853702
- config_name: answer_selection_experiments
features:
- name: question
dtype: string
- name: candidate
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 13782826
num_examples: 66438
- name: test
num_bytes: 4008077
num_examples: 19435
- name: validation
num_bytes: 1954877
num_examples: 9377
download_size: 18602700
dataset_size: 19745780
- config_name: answer_triggering_analysis
features:
- name: section
dtype: string
- name: question
dtype: string
- name: article
dtype: string
- name: is_paraphrase
dtype: bool
- name: topic
dtype:
class_label:
names:
'0': MUSIC
'1': TV
'2': TRAVEL
'3': ART
'4': SPORT
'5': COUNTRY
'6': MOVIES
'7': HISTORICAL EVENTS
'8': SCIENCE
'9': FOOD
- name: q_types
sequence:
class_label:
names:
'0': what
'1': why
'2': when
'3': who
'4': where
'5': how
'6': ''
- name: candidate_list
sequence:
- name: article
dtype: string
- name: section
dtype: string
- name: candidates
sequence: string
- name: answers
sequence: int32
splits:
- name: train
num_bytes: 30176650
num_examples: 5529
- name: test
num_bytes: 8766787
num_examples: 1590
- name: validation
num_bytes: 4270904
num_examples: 785
download_size: 46149676
dataset_size: 43214341
- config_name: answer_triggering_experiments
features:
- name: question
dtype: string
- name: candidate
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 42956518
num_examples: 205075
- name: test
num_bytes: 12504961
num_examples: 59845
- name: validation
num_bytes: 6055616
num_examples: 28798
download_size: 57992239
dataset_size: 61517095
---
# Dataset Card for SelQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/emorynlp/selqa
- **Repository:** https://github.com/emorynlp/selqa
- **Paper:** https://arxiv.org/abs/1606.00851
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** Tomasz Jurczyk <http://tomaszjurczyk.com/>, Jinho D. Choi <http://www.mathcs.emory.edu/~choi/home.html>
### Dataset Summary
SelQA: A New Benchmark for Selection-Based Question Answering
### Supported Tasks and Leaderboards
Question Answering
### Languages
English
## Dataset Structure
### Data Instances
An example from the `answer selection` set:
```
{
"section": "Museums",
"question": "Where are Rockefeller Museum and LA Mayer Institute for Islamic Art?",
"article": "Israel",
"is_paraphrase": true,
"topic": "COUNTRY",
"answers": [
5
],
"candidates": [
"The Israel Museum in Jerusalem is one of Israel's most important cultural institutions and houses the Dead Sea scrolls, along with an extensive collection of Judaica and European art.",
"Israel's national Holocaust museum, Yad Vashem, is the world central archive of Holocaust-related information.",
"Beth Hatefutsoth (the Diaspora Museum), on the campus of Tel Aviv University, is an interactive museum devoted to the history of Jewish communities around the world.",
"Apart from the major museums in large cities, there are high-quality artspaces in many towns and \"kibbutzim\".",
"\"Mishkan Le'Omanut\" on Kibbutz Ein Harod Meuhad is the largest art museum in the north of the country.",
"Several Israeli museums are devoted to Islamic culture, including the Rockefeller Museum and the L. A. Mayer Institute for Islamic Art, both in Jerusalem.",
"The Rockefeller specializes in archaeological remains from the Ottoman and other periods of Middle East history.",
"It is also the home of the first hominid fossil skull found in Western Asia called Galilee Man.",
"A cast of the skull is on display at the Israel Museum."
],
"q_types": [
"where"
]
}
```
An example from the `answer triggering` set:
```
{
"section": "Museums",
"question": "Where are Rockefeller Museum and LA Mayer Institute for Islamic Art?",
"article": "Israel",
"is_paraphrase": true,
"topic": "COUNTRY",
"candidate_list": [
{
"article": "List of places in Jerusalem",
"section": "List_of_places_in_Jerusalem-Museums",
"answers": [],
"candidates": [
" Israel Museum *Shrine of the Book *Rockefeller Museum of Archeology Bible Lands Museum Jerusalem Yad Vashem Holocaust Museum L.A. Mayer Institute for Islamic Art Bloomfield Science Museum Natural History Museum Museum of Italian Jewish Art Ticho House Tower of David Jerusalem Tax Museum Herzl Museum Siebenberg House Museums.",
"Museum on the Seam "
]
},
{
"article": "Israel",
"section": "Israel-Museums",
"answers": [
5
],
"candidates": [
"The Israel Museum in Jerusalem is one of Israel's most important cultural institutions and houses the Dead Sea scrolls, along with an extensive collection of Judaica and European art.",
"Israel's national Holocaust museum, Yad Vashem, is the world central archive of Holocaust-related information.",
"Beth Hatefutsoth (the Diaspora Museum), on the campus of Tel Aviv University, is an interactive museum devoted to the history of Jewish communities around the world.",
"Apart from the major museums in large cities, there are high-quality artspaces in many towns and \"kibbutzim\".",
"\"Mishkan Le'Omanut\" on Kibbutz Ein Harod Meuhad is the largest art museum in the north of the country.",
"Several Israeli museums are devoted to Islamic culture, including the Rockefeller Museum and the L. A. Mayer Institute for Islamic Art, both in Jerusalem.",
"The Rockefeller specializes in archaeological remains from the Ottoman and other periods of Middle East history.",
"It is also the home of the first hominid fossil skull found in Western Asia called Galilee Man.",
"A cast of the skull is on display at the Israel Museum."
]
},
{
"article": "L. A. Mayer Institute for Islamic Art",
"section": "L._A._Mayer_Institute_for_Islamic_Art-Abstract",
"answers": [],
"candidates": [
"The L.A. Mayer Institute for Islamic Art (Hebrew: \u05de\u05d5\u05d6\u05d9\u05d0\u05d5\u05df \u05dc.",
"\u05d0.",
"\u05de\u05d0\u05d9\u05e8 \u05dc\u05d0\u05de\u05e0\u05d5\u05ea \u05d4\u05d0\u05e1\u05dc\u05d0\u05dd) is a museum in Jerusalem, Israel, established in 1974.",
"It is located in Katamon, down the road from the Jerusalem Theater.",
"The museum houses Islamic pottery, textiles, jewelry, ceremonial objects and other Islamic cultural artifacts.",
"It is not to be confused with the Islamic Museum, Jerusalem. "
]
},
{
"article": "Islamic Museum, Jerusalem",
"section": "Islamic_Museum,_Jerusalem-Abstract",
"answers": [],
"candidates": [
"The Islamic Museum is a museum on the Temple Mount in the Old City section of Jerusalem.",
"On display are exhibits from ten periods of Islamic history encompassing several Muslim regions.",
"The museum is located adjacent to al-Aqsa Mosque.",
"It is not to be confused with the L. A. Mayer Institute for Islamic Art, also a museum in Jerusalem. "
]
},
{
"article": "L. A. Mayer Institute for Islamic Art",
"section": "L._A._Mayer_Institute_for_Islamic_Art-Contemporary_Arab_art",
"answers": [],
"candidates": [
"In 2008, a group exhibit of contemporary Arab art opened at L.A. Mayer Institute, the first show of local Arab art in an Israeli museum and the first to be mounted by an Arab curator.",
"Thirteen Arab artists participated in the show. "
]
}
],
"q_types": [
"where"
]
}
```
An example from any of the `experiments` data:
```
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ? The Israel Museum in Jerusalem is one of Israel 's most important cultural institutions and houses the Dead Sea scrolls , along with an extensive collection of Judaica and European art . 0
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ? Israel 's national Holocaust museum , Yad Vashem , is the world central archive of Holocaust - related information . 0
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ? Beth Hatefutsoth ( the Diaspora Museum ) , on the campus of Tel Aviv University , is an interactive museum devoted to the history of Jewish communities around the world . 0
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ? Apart from the major museums in large cities , there are high - quality artspaces in many towns and " kibbutzim " . 0
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ? " Mishkan Le'Omanut " on Kibbutz Ein Harod Meuhad is the largest art museum in the north of the country . 0
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ? Several Israeli museums are devoted to Islamic culture , including the Rockefeller Museum and the L. A. Mayer Institute for Islamic Art , both in Jerusalem . 1
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ? The Rockefeller specializes in archaeological remains from the Ottoman and other periods of Middle East history . 0
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ? It is also the home of the first hominid fossil skull found in Western Asia called Galilee Man . 0
Where are Rockefeller Museum and LA Mayer Institute for Islamic Art ? A cast of the skull is on display at the Israel Museum . 0
```
### Data Fields
#### Answer Selection
##### Data for Analysis
for analysis, the columns are:
* `question`: the question.
* `article`: the Wikipedia article related to this question.
* `section`: the section in the Wikipedia article related to this question.
* `topic`: the topic of this question, where the topics are *MUSIC*, *TV*, *TRAVEL*, *ART*, *SPORT*, *COUNTRY*, *MOVIES*, *HISTORICAL EVENTS*, *SCIENCE*, *FOOD*.
* `q_types`: the list of question types, where the types are *what*, *why*, *when*, *who*, *where*, and *how*. If empty, none of those types are recognized in this question.
* `is_paraphrase`: *True* if this question is a paraphrase of some other question in this dataset; otherwise, *False*.
* `candidates`: the list of sentences in the related section.
* `answers`: the list of candidate indices containing the answer context of this question.
##### Data for Experiments
for experiments, each column gives:
* `0`: a question where all tokens are separated.
* `1`: a candidate of the question where all tokens are separated.
* `2`: the label where `0` implies no answer to the question is found in this candidate and `1` implies the answer is found.
#### Answer Triggering
##### Data for Analysis
for analysis, the columns are:
* `question`: the question.
* `article`: the Wikipedia article related to this question.
* `section`: the section in the Wikipedia article related to this question.
* `topic`: the topic of this question, where the topics are *MUSIC*, *TV*, *TRAVEL*, *ART*, *SPORT*, *COUNTRY*, *MOVIES*, *HISTORICAL EVENTS*, *SCIENCE*, *FOOD*.
* `q_types`: the list of question types, where the types are *what*, *why*, *when*, *who*, *where*, and *how*. If empty, none of those types are recognized in this question.
* `is_paraphrase`: *True* if this question is a paraphrase of some other question in this dataset; otherwise, *False*.
* `candidate_list`: the list of 5 candidate sections:
* `article`: the title of the candidate article.
* `section`: the section in the candidate article.
* `candidates`: the list of sentences in this candidate section.
* `answers`: the list of candidate indices containing the answer context of this question (can be empty).
##### Data for Experiments
for experiments, each column gives:
* `0`: a question where all tokens are separated.
* `1`: a candidate of the question where all tokens are separated.
* `2`: the label where `0` implies no answer to the question is found in this candidate and `1` implies the answer is found.
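A minimal loading sketch with the 🤗 `datasets` library (configuration and field names are taken from this card):

```
from datasets import load_dataset

# Load one SelQA configuration; config and field names follow the card above
selqa = load_dataset("selqa", "answer_selection_analysis")

example = selqa["train"][0]
print(example["question"])
# Indices in `answers` point into the `candidates` sentence list
for idx in example["answers"]:
    print("answer sentence:", example["candidates"][idx])
```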
### Data Splits
| |Train| Valid| Test|
| --- | --- | --- | --- |
| Answer Selection | 5529 | 785 | 1590 |
| Answer Triggering | 27645 | 3925 | 7950 |
## Dataset Creation
### Curation Rationale
To encourage research and provide an initial benchmark for selection-based question answering and answer triggering tasks.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
Crowdsourced
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better selection-based question answering systems.
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Apache License 2.0
### Citation Information
```
@InProceedings{7814688,
author={T. {Jurczyk} and M. {Zhai} and J. D. {Choi}},
booktitle={2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI)},
title={SelQA: A New Benchmark for Selection-Based Question Answering},
year={2016},
volume={},
number={},
pages={820-827},
doi={10.1109/ICTAI.2016.0128}
}
```
### Contributions
Thanks to [@Bharat123rox](https://github.com/Bharat123rox) for adding this dataset. |
mteb/askubuntudupquestions-reranking | 2022-09-27T19:11:08.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 0 | 354 | ---
language:
- en
--- |
mteb/sprintduplicatequestions-pairclassification | 2022-09-27T19:15:57.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 0 | 353 | ---
language:
- en
--- |
nielsr/docvqa_1200_examples | 2022-08-05T14:20:07.000Z | [
"region:us"
] | nielsr | null | null | null | 1 | 352 | Entry not found |
sedthh/gutenberg_english | 2023-03-17T09:50:22.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"project gutenberg",
"e-book",
"gutenberg.org",
"region:us"
] | sedthh | null | null | null | 3 | 352 | ---
dataset_info:
features:
- name: TEXT
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 18104255935
num_examples: 48284
download_size: 10748877194
dataset_size: 18104255935
license: mit
task_categories:
- text-generation
language:
- en
tags:
- project gutenberg
- e-book
- gutenberg.org
pretty_name: Project Gutenberg eBooks in English
size_categories:
- 10K<n<100K
---
# Dataset Card for Project Gutenber - English Language eBooks
A collection of English language eBooks (48,284 rows, 80%+ of all English language books available on the site) from the Project Gutenberg site with metadata removed.
Originally collected for https://github.com/LAION-AI/Open-Assistant (follows the OpenAssistant training format)
The METADATA column contains catalogue meta information on each book as a serialized JSON:
| key | original column |
|----|----|
| language | - |
| text_id | Text# unique book identifier on Project Gutenberg as *int* |
| title | Title of the book as *string* |
| issued | Issued date as *string* |
| authors | Authors as *string*, comma separated sometimes with dates |
| subjects | Subjects as *string*, various formats |
| locc | LoCC code as *string* |
| bookshelves | Bookshelves as *string*, optional |
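A minimal parsing sketch (column names are taken from this card; streaming is used only to avoid downloading the full dump just to peek at a few rows):

```
import json
from itertools import islice
from datasets import load_dataset

# METADATA is a serialized JSON string, so it has to be parsed before use
books = load_dataset("sedthh/gutenberg_english", split="train", streaming=True)

for row in islice(books, 3):
    meta = json.loads(row["METADATA"])
    print(meta["title"], "-", meta["authors"])
    print(row["TEXT"][:200], "...")
```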
## Source data
**How was the data generated?**
- A crawler (see Open-Assistant repository) downloaded the raw HTML code for
each eBook based on **Text#** id in the Gutenberg catalogue (if available)
- The metadata and the body of text are not clearly separated so an additional
parser attempts to split them, then remove transcriber's notes and e-book
related information from the body of text (text clearly marked as copyrighted or
malformed was skipped and not collected)
- The body of cleaned TEXT as well as the catalogue METADATA is then saved as
a parquet file, with all columns being strings
**Copyright notice:**
- Some of the books are copyrighted! The crawler ignored all books
with an English copyright header by utilizing a regex expression, but make
sure to check out the metadata for each book manually to ensure they are okay
to use in your country! More information on copyright:
https://www.gutenberg.org/help/copyright.html and
https://www.gutenberg.org/policy/permission.html
- Project Gutenberg has the following requests when using books without
metadata: _Books obtained from the Project Gutenberg site should have the
following legal note next to them: "This eBook is for the use of anyone
anywhere in the United States and most other parts of the world at no cost and
with almost no restrictions whatsoever. You may copy it, give it away or
re-use it under the terms of the Project Gutenberg License included with this
eBook or online at www.gutenberg.org. If you are not located in the United
States, you will have to check the laws of the country where you are located
before using this eBook."_ |
d0rj/audiocaps | 2023-06-30T12:17:56.000Z | [
"task_categories:text-to-speech",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"youtube",
"captions",
"region:us"
] | d0rj | null | null | null | 0 | 352 | ---
dataset_info:
features:
- name: audiocap_id
dtype: int64
- name: youtube_id
dtype: string
- name: start_time
dtype: int64
- name: caption
dtype: string
splits:
- name: train
num_bytes: 4162928
num_examples: 49838
- name: validation
num_bytes: 198563
num_examples: 2475
- name: test
num_bytes: 454652
num_examples: 4875
download_size: 2781679
dataset_size: 4816143
license: mit
task_categories:
- text-to-speech
language:
- en
multilinguality:
- monolingual
tags:
- youtube
- captions
pretty_name: AudioCaps
size_categories:
- 10K<n<100K
source_datasets:
- original
paperswithcode_id: audiocaps
---
# audiocaps
## Dataset Description
- **Homepage:** https://audiocaps.github.io/
- **Repository:** https://github.com/cdjkim/audiocaps
- **Paper:** [AudioCaps: Generating Captions for Audios in The Wild](https://aclanthology.org/N19-1011.pdf)
HuggingFace mirror of [official data repo](https://github.com/cdjkim/audiocaps). |
jxie/aircraft | 2023-08-16T00:10:15.000Z | [
"region:us"
] | jxie | null | null | null | 0 | 352 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '10'
'3': '11'
'4': '12'
'5': '13'
'6': '14'
'7': '15'
'8': '16'
'9': '17'
'10': '18'
'11': '19'
'12': '2'
'13': '20'
'14': '21'
'15': '22'
'16': '23'
'17': '24'
'18': '25'
'19': '26'
'20': '27'
'21': '28'
'22': '29'
'23': '3'
'24': '30'
'25': '31'
'26': '32'
'27': '33'
'28': '34'
'29': '35'
'30': '36'
'31': '37'
'32': '38'
'33': '39'
'34': '4'
'35': '40'
'36': '41'
'37': '42'
'38': '43'
'39': '44'
'40': '45'
'41': '46'
'42': '47'
'43': '48'
'44': '49'
'45': '5'
'46': '50'
'47': '51'
'48': '52'
'49': '53'
'50': '54'
'51': '55'
'52': '56'
'53': '57'
'54': '58'
'55': '59'
'56': '6'
'57': '60'
'58': '61'
'59': '62'
'60': '63'
'61': '64'
'62': '65'
'63': '66'
'64': '67'
'65': '68'
'66': '69'
'67': '7'
'68': '70'
'69': '71'
'70': '72'
'71': '73'
'72': '74'
'73': '75'
'74': '76'
'75': '77'
'76': '78'
'77': '79'
'78': '8'
'79': '80'
'80': '81'
'81': '82'
'82': '83'
'83': '84'
'84': '85'
'85': '86'
'86': '87'
'87': '88'
'88': '89'
'89': '9'
'90': '90'
'91': '91'
'92': '92'
'93': '93'
'94': '94'
'95': '95'
'96': '96'
'97': '97'
'98': '98'
'99': '99'
splits:
- name: train
num_bytes: 1729590062.171
num_examples: 6667
- name: validation
num_bytes: 870305261.445
num_examples: 3333
- name: test
num_bytes: 873737634.84
num_examples: 3333
download_size: 3674654885
dataset_size: 3473632958.4560003
---
# Dataset Card for "aircraft"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
QingyiSi/Alpaca-CoT | 2023-09-14T08:52:10.000Z | [
"language:en",
"language:zh",
"language:ml",
"license:apache-2.0",
"Instruction",
"Cot",
"region:us"
] | QingyiSi | null | null | null | 491 | 351 | ---
language:
- en
- zh
- ml
tags:
- Instruction
- Cot
license: apache-2.0
datasets:
- dataset1
- dataset2
---
# Instruction-Finetuning Dataset Collection (Alpaca-CoT)
This repository will continuously collect various instruction-tuning datasets. We standardize the different datasets into the same format, which can be directly loaded by the [code](https://github.com/PhoebusSi/alpaca-CoT) of the Alpaca model.
We have also conducted an empirical study on various instruction-tuning datasets based on the Alpaca model, as shown in [https://github.com/PhoebusSi/alpaca-CoT](https://github.com/PhoebusSi/alpaca-CoT).
If you think this dataset collection is helpful to you, please `like` this dataset and `star` our [github project](https://github.com/PhoebusSi/alpaca-CoT)!
You are warmly welcome to provide us with any instruction-tuning datasets we have not yet collected (or their sources). We will format them uniformly, train the Alpaca model with these datasets, and open source the model checkpoints.
# Contribute
Welcome to join us and become a contributor to this project!
If you want to share some datasets, format the data as follows:
```
example.json
[
{"instruction": instruction string,
"input": input string, # (may be empty)
"output": output string}
]
```
Folder should be like this:
```
Alpaca-CoT
|
|----example
| |
| |----example.json
| |
| ----example_context.json
...
```
Create a new pull request in [Community
](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/discussions) and publish your branch when you are ready. We will merge it as soon as we can.
# Data Usage and Resources
## Data Format
All data in this folder is formatted into the same templates, where each sample is as follows:
```
[
{"instruction": instruction string,
"input": input string, # (may be empty)
"output": output string}
]
```
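As a rough sketch, one of the JSON files can be read and turned into training prompts like this (the file path and prompt template are illustrative, not part of the collection):

```
import json

# Every file in the collection follows the instruction/input/output schema above,
# so a single loader covers all of them (the path below is only an example)
with open("alpaca/alpaca_data_cleaned.json", encoding="utf-8") as f:
    samples = json.load(f)

def to_prompt(sample):
    # Build a simple training prompt; the exact template is up to the downstream trainer
    if sample["input"]:
        return f"{sample['instruction']}\n\n{sample['input']}\n\n{sample['output']}"
    return f"{sample['instruction']}\n\n{sample['output']}"

print(to_prompt(samples[0]))
```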
## alpaca
#### alpaca_data.json
> This dataset is published by [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca). It contains 52K English instruction-following samples obtained by [Self-Instruction](https://github.com/yizhongw/self-instruct) techniques.
#### alpaca_data_cleaned.json
> This dataset is obtained [here](https://github.com/tloen/alpaca-lora). It is a revised version of `alpaca_data.json` with various tokenization artifacts stripped out.
## alpacaGPT4
#### alpaca_gpt4_data.json
> This dataset is published by [Instruction-Tuning-with-GPT-4](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM).
It contains 52K English instruction-following samples generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.
#### alpaca_gpt4_data_zh.json
> This dataset is generated by GPT-4 using Chinese prompts translated from Alpaca by ChatGPT.
<!-- ## belle_cn
#### belle_data_cn.json
This dataset is published by [BELLE](https://github.com/LianjiaTech/BELLE). It contains 0.5M Chinese instruction-following samples, which is also generated by [Self-Instruction](https://github.com/yizhongw/self-instruct) techniques.
#### belle_data1M_cn.json
This dataset is published by [BELLE](https://github.com/LianjiaTech/BELLE). It contains 1M Chinese instruction-following samples. The data of `belle_data_cn.json` and `belle_data1M_cn.json` are not duplicated. -->
## Chain-of-Thought
#### CoT_data.json
> This dataset is obtained by formatting the combination of 9 CoT datasets published by [FLAN](https://github.com/google-research/FLAN). It contains 9 CoT tasks involving 74771 samples.
#### CoT_CN_data.json
> This dataset is obtained by translating `CoT_data.json` into Chinese, using Google Translate (en2cn).
#### formatted_cot_data folder
> This folder contains the formatted English data for each CoT dataset.
#### formatted_cot_data folder
> This folder contains the formatted Chinese data for each CoT dataset.
## CodeAlpaca
#### code_alpaca.json
> This dataset is published by [codealpaca](https://github.com/sahil280114/codealpaca). It contains code generation task involving 20022 samples.
## finance
#### finance_en.json
> This dataset is collected from [here](https://huggingface.co/datasets/gbharti/finance-alpaca). It contains 68912 financial related instructions in English.
## firefly
#### firefly.json
> This dataset is collected from [here](https://github.com/yangjianxin1/Firefly). It contains 1,649,398 Chinese instructions covering 23 NLP tasks.
## GPT4all
#### gpt4all.json
> This dataset is collected from [here](https://github.com/nomic-ai/gpt4all). It contains 806,199 English instructions covering code, story, and dialog tasks.
#### gpt4all_without_p3.json
> GPT4all without Bigscience/P3; it contains 437,605 samples.
## GPTeacher
#### GPTeacher.json
> This dataset is collected from [here](https://github.com/teknium1/GPTeacher). It contains 29,013 English instructions generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer.
## Guanaco
#### GuanacoDataset.json
> This dataset is collected from [here](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset). It contains 534610 en instructions generated by text-davinci-003 upon 175 tasks from the Alpaca model by providing rewrites of seed tasks in different languages and adding new tasks specifically designed for English grammar analysis, natural language understanding, cross-lingual self-awareness, and explicit content recognition.
#### Guanaco_additional_Dataset.json
> A new additional larger dataset for different languages.
## HC3
#### HC3_ChatGPT.json/HC3_Human.json
> This dataset is collected from [here](https://huggingface.co/datasets/Hello-SimpleAI/HC3). It contains 37175 en/zh instructions generated by ChatGPT and human.
#### HC3_ChatGPT_deduplication.json/HC3_Human_deduplication.json
> HC3 dataset with duplicate instructions removed.
## instinwild
#### instinwild_en.json & instinwild_cn.json
> The two datasets are obtained [here](https://github.com/XueFuzhao/InstructionWild). It contains 52191 English and 51504 Chinese instructions, which are collected from Twitter, where users tend to share their interesting prompts of mostly generation, open QA, and mind-storm types. (Colossal AI used these datasets to train the ColossalChat model.)
## instruct
#### instruct.json
> This dataset is obtained [here](https://huggingface.co/datasets/swype/instruct). It contains 888,969 English instructions, augmented using the advanced NLP tools provided by AllenAI.
## Natural Instructions
#### natural-instructions-1700tasks.zip
> This dataset is obtained [here](https://github.com/allenai/natural-instructions). It contains 5,040,134 instructions, which are collected from diverse NLP tasks.
## prosocial dialog
#### natural-instructions-1700tasks.zip
> This dataset is obtained [here](https://huggingface.co/datasets/allenai/prosocial-dialog). It contains 165,681 English instructions, produced by GPT-3 rewriting questions together with human feedback.
## xP3
#### natural-instructions-1700tasks.zip
> This dataset is obtained [here](https://huggingface.co/datasets/bigscience/xP3). It contains 78,883,588 instructions, collected from prompts & datasets across 46 languages & 16 NLP tasks.
## Chinese-instruction-collection
> all datasets of Chinese instruction collection
## combination
#### alcapa_plus_belle_data.json
> This dataset is the combination of English `alpaca_data.json` and Chinese `belle_data_cn.json`.
#### alcapa_plus_cot_data.json
> This dataset is the combination of English `alpaca_data.json` and CoT `CoT_data.json`.
#### alcapa_plus_belle_cot_data.json
> This dataset is the combination of English `alpaca_data.json`, Chinese `belle_data_cn.json` and CoT `CoT_data.json`.
## Citation
Please cite the repo if you use the data collection, code, and experimental findings in this repo.
```
@misc{alpaca-cot,
author = {Qingyi Si, Zheng Lin },
school = {Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China},
title = {Alpaca-CoT: An Instruction Fine-Tuning Platform with Instruction Data Collection and Unified Large Language Models Interface},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/PhoebusSi/alpaca-CoT}},
}
```
Please also cite the original Stanford Alpaca, BELLE, and FLAN papers.
|
CheshireAI/guanaco-unchained | 2023-08-17T00:12:34.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | CheshireAI | null | null | null | 21 | 351 | ---
license: apache-2.0
language:
- en
pretty_name: Guanaco Unchained
size_categories:
- 1K<n<10K
---
# Guanaco Unchained
"Guanaco Unchained" is a refined and optimized version of the original [Guanaco dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). It is specifically curated to maintain high-quality data while minimizing alignment issues.
The main transformations that were applied to the dataset include:
- Language Filtering: To ensure quality control, most of the non-English prompts were removed.
- AI Identification Removal: Any references suggesting the model's identity as AI, such as "OpenAssistant", "As an AI language model", and similar prompts, were removed. This adjustment allows for a more human-like interaction.
- Content Refining: Responses that indicated refusal, moralizing, or strong subjectivity were either removed or modified to increase accuracy and reduce bias.
- Context Trimming: In scenarios where a human response lacked a corresponding model answer, the last human response was removed to maintain consistency in the instruct pair format.
- Apologetic Language Reduction: The dataset was also revised to remove or modify apologetic language in the responses, thereby ensuring assertiveness and precision.
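As a rough illustration of the kind of filtering described above, a toy sketch is shown below (the phrase list, file name, and field names are hypothetical; the actual curation was largely manual):

```
import json
import re

# Toy AI-identifier / refusal filter; the real curation used manual review,
# and these phrases are only examples
AI_PATTERNS = re.compile(
    r"as an ai language model|openassistant|i'm sorry, but i cannot",
    re.IGNORECASE,
)

def keep(pair):
    # `pair` is assumed to be a dict with a "response" field (hypothetical schema)
    return not AI_PATTERNS.search(pair["response"])

with open("guanaco_pairs.json", encoding="utf-8") as f:  # hypothetical file name
    pairs = json.load(f)

filtered = [p for p in pairs if keep(p)]
print(f"kept {len(filtered)} of {len(pairs)} pairs")
```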
Dataset Information:
The primary source of the data is the [Guanaco dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). Following this, a series of processing steps (as outlined above) were performed to remove unnecessary or ambiguous elements, resulting in the "Guanaco Unchained" dataset. The structure of the dataset remains consistent with the original Guanaco dataset, containing pairs of human prompts and assistant responses.
Known Limitations:
The dataset was manually curated, and therefore, may contain unintentional errors, oversights, or inconsistencies. Despite the concerted effort to remove all instances of AI identification, there may still be undetected instances. The dataset's multilingual capability may also be reduced due to the removal of non-English prompts.
Additional Information:
The "Guanaco Unchained" dataset is ideally suited for any application that aims for a more human-like interaction with minimized AI identifiers and alignment issues. It is particularly beneficial in contexts where direct, assertive, and high-quality English responses are desired.
|
open_subtitles | 2023-06-01T14:59:58.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"size_categories:1M<n<10M",
"size_categories:n<1K",
"source_datasets:original",
"language:af",
"language:ar",
"language:bg",
"language:bn",
"language:br",
"language:bs",
"language:ca",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:gl",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kk",
"language:ko",
"language:lt",
"language:lv",
"language:mk",
"language:ml",
"language:ms",
"language:nl",
"language:no",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:si",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:ta",
"language:te",
"language:th",
"language:tl",
"language:tr",
"language:uk",
"language:ur",
"language:vi",
"language:zh",
"license:unknown",
"region:us"
] | null | This is a new collection of translated movie subtitles from http://www.opensubtitles.org/.
IMPORTANT: If you use the OpenSubtitle corpus: Please, add a link to http://www.opensubtitles.org/ to your website and to your reports and publications produced with the data!
This is a slightly cleaner version of the subtitle collection using improved sentence alignment and better language checking.
62 languages, 1,782 bitexts
total number of files: 3,735,070
total number of tokens: 22.10G
total number of sentence fragments: 3.35G | P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016) | null | 29 | 350 | ---
annotations_creators:
- found
language_creators:
- found
language:
- af
- ar
- bg
- bn
- br
- bs
- ca
- cs
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- gl
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- ko
- lt
- lv
- mk
- ml
- ms
- nl
- 'no'
- pl
- pt
- ro
- ru
- si
- sk
- sl
- sq
- sr
- sv
- ta
- te
- th
- tl
- tr
- uk
- ur
- vi
- zh
language_bcp47:
- pt-BR
- ze-EN
- ze-ZH
- zh-CN
- zh-TW
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
- 1M<n<10M
- n<1K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: opensubtitles
pretty_name: OpenSubtitles
dataset_info:
- config_name: bs-eo
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: bs
dtype: uint32
- name: eo
dtype: uint32
- name: sentenceIds
struct:
- name: bs
sequence: uint32
- name: eo
sequence: uint32
- name: translation
dtype:
translation:
languages:
- bs
- eo
splits:
- name: train
num_bytes: 1204266
num_examples: 10989
download_size: 333050
dataset_size: 1204266
- config_name: fr-hy
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: fr
dtype: uint32
- name: hy
dtype: uint32
- name: sentenceIds
struct:
- name: fr
sequence: uint32
- name: hy
sequence: uint32
- name: translation
dtype:
translation:
languages:
- fr
- hy
splits:
- name: train
num_bytes: 132450
num_examples: 668
download_size: 41861
dataset_size: 132450
- config_name: da-ru
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: da
dtype: uint32
- name: ru
dtype: uint32
- name: sentenceIds
struct:
- name: da
sequence: uint32
- name: ru
sequence: uint32
- name: translation
dtype:
translation:
languages:
- da
- ru
splits:
- name: train
num_bytes: 1082649105
num_examples: 7543012
download_size: 267995167
dataset_size: 1082649105
- config_name: en-hi
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: en
dtype: uint32
- name: hi
dtype: uint32
- name: sentenceIds
struct:
- name: en
sequence: uint32
- name: hi
sequence: uint32
- name: translation
dtype:
translation:
languages:
- en
- hi
splits:
- name: train
num_bytes: 13845544
num_examples: 93016
download_size: 2967295
dataset_size: 13845544
- config_name: bn-is
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: bn
dtype: uint32
- name: is
dtype: uint32
- name: sentenceIds
struct:
- name: bn
sequence: uint32
- name: is
sequence: uint32
- name: translation
dtype:
translation:
languages:
- bn
- is
splits:
- name: train
num_bytes: 6371251
num_examples: 38272
download_size: 1411625
dataset_size: 6371251
config_names:
- bn-is
- bs-eo
- da-ru
- en-hi
- fr-hy
---
# Dataset Card for OpenSubtitles
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/OpenSubtitles.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2016/pdf/62_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
To load a language pair that isn't part of the config, all you need to do is specify the language codes as a pair.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/OpenSubtitles.php
E.g.
`dataset = load_dataset("open_subtitles", lang1="fi", lang2="hi")`
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- af
- ar
- bg
- bn
- br
- bs
- ca
- cs
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- gl
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- ko
- lt
- lv
- mk
- ml
- ms
- nl
- no
- pl
- pt
- pt_br: Portuguese (Brazil) (pt-BR)
- ro
- ru
- si
- sk
- sl
- sq
- sr
- sv
- ta
- te
- th
- tl
- tr
- uk
- ur
- vi
- ze_en: English constituent of Bilingual Chinese-English (subtitles displaying two languages at once, one per line)
- ze_zh: Chinese constituent of Bilingual Chinese-English (subtitles displaying two languages at once, one per line)
- zh_cn: Simplified Chinese (zh-CN, `zh-Hans`)
- zh_tw: Traditional Chinese (zh-TW, `zh-Hant`)
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
medalpaca/medical_meadow_wikidoc | 2023-04-06T17:05:18.000Z | [
"task_categories:question-answering",
"language:en",
"license:cc",
"region:us"
] | medalpaca | null | null | null | 1 | 350 | ---
license: cc
task_categories:
- question-answering
language:
- en
---
# Dataset Card for WikiDoc
For the dataset containing patient information from wikidoc refer to [this dataset](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc_patient_information)
## Dataset Description
- **Source:** https://www.wikidoc.org/index.php/Main_Page
- **Repository:** https://github.com/kbressem/medalpaca
- **Paper:** TBA
### Dataset Summary
This dataset contains medical question-answer pairs extracted from [WikiDoc](https://www.wikidoc.org/index.php/Main_Page),
a collaborative platform for medical professionals to share and contribute to up-to-date medical knowledge.
The platform has two main subsites, the "Living Textbook" and "Patient Information". The "Living Textbook"
contains chapters for various medical specialties, which we crawled. We then used GPT-3.5-Turbo to rephrase
each paragraph heading into a question and used the paragraph as the answer. Patient Information is structured differently,
in that each section subheading is already a question, making rephrasing unnecessary.
**Note:** This dataset is still a WIP. While the Q/A pairs from the patient information seem to be mostly correct,
the conversion using GPT-3.5-Turbo yielded some unsatisfactory results in approximately 30% of cases. We are in the process of cleaning this dataset.
### Citation Information
TBA |
result-kand2-sdxl-wuerst-karlo/6155933b | 2023-09-27T13:21:58.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 350 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 215
num_examples: 10
download_size: 1402
dataset_size: 215
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "6155933b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ml6team/cnn_dailymail_nl | 2022-10-22T14:03:06.000Z | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:https://github.com/huggingface/datasets/tree/master/datasets/cnn_dailymail",
"language:nl",
"license:mit",
"region:us"
] | ml6team | This dataset is the CNN/Dailymail dataset translated to Dutch.
This is the original dataset:
```
load_dataset("cnn_dailymail", '3.0.0')
```
And this is the HuggingFace translation pipeline:
```
pipeline(
task='translation_en_to_nl',
model='Helsinki-NLP/opus-mt-en-nl',
tokenizer='Helsinki-NLP/opus-mt-en-nl')
``` | @article{DBLP:journals/corr/SeeLM17,
author = {Abigail See and
Peter J. Liu and
Christopher D. Manning},
title = {Get To The Point: Summarization with Pointer-Generator Networks},
journal = {CoRR},
volume = {abs/1704.04368},
year = {2017},
url = {http://arxiv.org/abs/1704.04368},
archivePrefix = {arXiv},
eprint = {1704.04368},
timestamp = {Mon, 13 Aug 2018 16:46:08 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/SeeLM17},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{hermann2015teaching,
title={Teaching machines to read and comprehend},
author={Hermann, Karl Moritz and Kocisky, Tomas and Grefenstette, Edward and Espeholt, Lasse and Kay, Will and Suleyman, Mustafa and Blunsom, Phil},
booktitle={Advances in neural information processing systems},
pages={1693--1701},
year={2015}
} | null | 13 | 349 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- nl
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- https://github.com/huggingface/datasets/tree/master/datasets/cnn_dailymail
task_categories:
- conditional-text-generation
task_ids:
- summarization
---
# Dataset Card for Dutch CNN Dailymail Dataset
## Dataset Description
- **Repository:** [CNN / DailyMail Dataset NL repository](https://huggingface.co/datasets/ml6team/cnn_dailymail_nl)
### Dataset Summary
The Dutch CNN / DailyMail Dataset is a machine-translated version of the English CNN / DailyMail dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail.
Most information about the dataset can be found on the [HuggingFace page](https://huggingface.co/datasets/cnn_dailymail) of the original English version.
These are the basic steps used to create this dataset (+ some chunking):
```
load_dataset("cnn_dailymail", '3.0.0')
```
And this is the HuggingFace translation pipeline:
```
pipeline(
task='translation_en_to_nl',
model='Helsinki-NLP/opus-mt-en-nl',
tokenizer='Helsinki-NLP/opus-mt-en-nl')
```
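For illustration, a combined sketch of the two steps (not the exact script that produced this dataset; batching and chunking of long articles are omitted):

```
from datasets import load_dataset
from transformers import pipeline

# Load the English source data and translate it to Dutch, as described above
cnn_en = load_dataset("cnn_dailymail", "3.0.0", split="validation")
translator = pipeline(
    task="translation_en_to_nl",
    model="Helsinki-NLP/opus-mt-en-nl",
    tokenizer="Helsinki-NLP/opus-mt-en-nl",
)

# A real run would need to chunk long articles to fit the model's maximum input length
highlights_nl = translator(cnn_en[0]["highlights"])[0]["translation_text"]
print(highlights_nl)
```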
### Data Fields
- `id`: a string containing the hexadecimal formatted SHA1 hash of the URL where the story was retrieved from
- `article`: a string containing the body of the news article
- `highlights`: a string containing the highlight of the article as written by the article author
### Data Splits
The Dutch CNN/DailyMail dataset follows the same splits as the original English version and has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 287,113 |
| Validation | 13,368 |
| Test | 11,490 |
|
ai4bharat/IndicSentiment | 2023-05-26T11:07:29.000Z | [
"region:us"
] | ai4bharat | \ | \ | null | 2 | 349 | Entry not found |
result-kand2-sdxl-wuerst-karlo/94c40829 | 2023-09-27T15:58:35.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 349 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 271
num_examples: 10
download_size: 1428
dataset_size: 271
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "94c40829"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/3ec30f64 | 2023-09-27T16:06:02.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 349 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 218
num_examples: 10
download_size: 1399
dataset_size: 218
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "3ec30f64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
qa_srl | 2022-11-18T21:40:16.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | The dataset contains question-answer pairs to model verbal predicate-argument structure. The questions start with wh-words (Who, What, Where, What, etc.) and contain a verb predicate in the sentence; the answers are phrases in the sentence.
There were 2 datasets used in the paper, newswire and Wikipedia. Unfortunately, the newswire dataset is built from the CoNLL-2009 English training set, which is covered under license.
Thus, we are providing only the Wikipedia training set here. Please check README.md for more details on the newswire dataset.
For the Wikipedia domain, randomly sampled sentences from the English Wikipedia (excluding questions and sentences with fewer than 10 or more than 60 words) were taken.
This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @InProceedings{huggingface:dataset,
title = {QA-SRL: Question-Answer Driven Semantic Role Labeling},
authors={Luheng He, Mike Lewis, Luke Zettlemoyer},
year={2015}
publisher = {cs.washington.edu},
howpublished={\\url{https://dada.cs.washington.edu/qasrl/#page-top}},
} | null | 1 | 348 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
- open-domain-qa
paperswithcode_id: qa-srl
pretty_name: QA-SRL
dataset_info:
features:
- name: sentence
dtype: string
- name: sent_id
dtype: string
- name: predicate_idx
dtype: int32
- name: predicate
dtype: string
- name: question
sequence: string
- name: answers
sequence: string
config_name: plain_text
splits:
- name: train
num_bytes: 1835549
num_examples: 6414
- name: validation
num_bytes: 632992
num_examples: 2183
- name: test
num_bytes: 637317
num_examples: 2201
download_size: 1087729
dataset_size: 3105858
---
# Dataset Card for QA-SRL
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Homepage](https://dada.cs.washington.edu/qasrl/#page-top)
- **Annotation Tool:** [Annotation tool](https://github.com/luheng/qasrl_annotation)
- **Repository:** [Repository](https://dada.cs.washington.edu/qasrl/#dataset)
- **Paper:** [Qa_srl paper](https://www.aclweb.org/anthology/D15-1076.pdf)
- **Point of Contact:** [Luheng He](luheng@cs.washington.edu)
### Dataset Summary
We model the predicate-argument structure of a sentence with a set of question-answer pairs. Our method allows practical large-scale annotation of training data. We focus on semantic rather than syntactic annotation, and introduce a scalable method for gathering data that allows both training and evaluation.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
We use question-answer pairs to model verbal predicate-argument structure. The questions start with wh-words (Who, What, Where, When, etc.) and contain a verb predicate in the sentence; the answers are phrases in the sentence. For example:
`UCD finished the 2006 championship as Dublin champions , by beating St Vincents in the final .`
Predicate | Question | Answer
---|---|---|
|Finished|Who finished something? | UCD
|Finished|What did someone finish?|the 2006 championship
|Finished|What did someone finish something as? |Dublin champions
|Finished|How did someone finish something? |by beating St Vincents in the final
|beating | Who beat someone? | UCD
|beating|When did someone beat someone? |in the final
|beating|Who did someone beat?| St Vincents
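Such instances can be inspected directly with the `datasets` library; a minimal sketch (assuming the canonical `qa_srl` dataset on the Hub) that prints the fields of the first training example:
```
from datasets import load_dataset
qasrl = load_dataset("qa_srl", split="train")
example = qasrl[0]
print(example["sentence"])
print(example["predicate"], example["predicate_idx"])
print(" ".join(example["question"]), "->", example["answers"])
```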
### Data Fields
Annotations provided are as follows:
- `sentence`: contains tokenized sentence
- `sent_id`: is the sentence identifier
- `predicate_idx`: the index of the predicate (its position in the sentence)
- `predicate`: the predicate token
- `question`: contains the question, which is a list of tokens. The question always consists of seven slots, as defined in the paper. The empty slots are represented with a marker “_”. The question ends with a question mark.
- `answers`: list of answers to the question
### Data Splits
Dataset | Sentences | Verbs | QAs
--- | --- | --- |---|
**newswire-train**|744|2020|4904|
**newswire-dev**|249|664|1606|
**newswire-test**|248|652|1599
**Wikipedia-train**|`1174`|`2647`|`6414`|
**Wikipedia-dev**|`392`|`895`|`2183`|
**Wikipedia-test**|`393`|`898`|`2201`|
**Please note**
This dataset contains only the Wikipedia data. The newswire portion requires the CoNLL-2009 English training data, which is distributed under a restrictive license, and is therefore not included here.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
We annotated over 3000 sentences (nearly 8,000 verbs) in total across two domains: newswire (PropBank) and Wikipedia.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
Non-expert annotators were given a short tutorial and a small set of sample annotations (about 10 sentences). Annotators were hired if they showed a good understanding of English and the task. The entire screening process usually took less than 2 hours.
#### Who are the annotators?
10 part-time, non-expert annotators from Upwork (previously oDesk).
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Luheng He](luheng@cs.washington.edu)
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{huggingface:dataset,
title = {QA-SRL: Question-Answer Driven Semantic Role Labeling},
author = {Luheng He and Mike Lewis and Luke Zettlemoyer},
year = {2015},
publisher = {cs.washington.edu},
howpublished={\\url{https://dada.cs.washington.edu/qasrl/#page-top}},
}
```
### Contributions
Thanks to [@bpatidar](https://github.com/bpatidar) for adding this dataset. |
result-kand2-sdxl-wuerst-karlo/634fb531 | 2023-09-27T16:00:39.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 348 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 273
num_examples: 10
download_size: 1461
dataset_size: 273
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "634fb531"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LysandreJik/glue-mnli-train | 2021-10-12T01:51:04.000Z | [
"region:us"
] | LysandreJik | null | null | null | 0 | 347 | Entry not found |
sil-ai/bloom-captioning | 2022-12-10T02:16:13.000Z | [
"task_ids:image-captioning",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:afr",
"language:af",
"language:aaa",
"language:abc",
"language:ada",
"language:adq",
"language:aeu",
"language:agq",
"language:ags",
"language:ahk",
"language:aia",
"language:ajz",
"language:aka",
"language:ak",
"language:ame",
"language:amh",
"language:am",
"language:amp",
"language:amu",
"language:ann",
"language:aph",
"language:awa",
"language:awb",
"language:azn",
"language:azo",
"language:bag",
"language:bam",
"language:bm",
"language:baw",
"language:bax",
"language:bbk",
"language:bcc",
"language:bce",
"language:bec",
"language:bef",
"language:ben",
"language:bn",
"language:bfd",
"language:bfm",
"language:bfn",
"language:bgf",
"language:bho",
"language:bhs",
"language:bis",
"language:bi",
"language:bjn",
"language:bjr",
"language:bkc",
"language:bkh",
"language:bkm",
"language:bkx",
"language:bob",
"language:bod",
"language:bo",
"language:boz",
"language:bqm",
"language:bra",
"language:brb",
"language:bri",
"language:brv",
"language:bss",
"language:bud",
"language:buo",
"language:bwt",
"language:bwx",
"language:bxa",
"language:bya",
"language:bze",
"language:bzi",
"language:cak",
"language:cbr",
"language:ceb",
"language:cgc",
"language:chd",
"language:chp",
"language:cim",
"language:clo",
"language:cmn",
"language:zh",
"language:cmo",
"language:csw",
"language:cuh",
"language:cuv",
"language:dag",
"language:ddg",
"language:ded",
"language:deu",
"language:de",
"language:dig",
"language:dje",
"language:dmg",
"language:dnw",
"language:dtp",
"language:dtr",
"language:dty",
"language:dug",
"language:eee",
"language:ekm",
"language:enb",
"language:enc",
"language:eng",
"language:en",
"language:ewo",
"language:fas",
"language:fa",
"language:fil",
"language:fli",
"language:fon",
"language:fra",
"language:fr",
"language:fub",
"language:fuh",
"language:gal",
"language:gbj",
"language:gou",
"language:gsw",
"language:guc",
"language:guj",
"language:gu",
"language:guz",
"language:gwc",
"language:hao",
"language:hat",
"language:ht",
"language:hau",
"language:ha",
"language:hbb",
"language:hig",
"language:hil",
"language:hin",
"language:hi",
"language:hla",
"language:hna",
"language:hre",
"language:hro",
"language:idt",
"language:ilo",
"language:ind",
"language:id",
"language:ino",
"language:isu",
"language:ita",
"language:it",
"language:jgo",
"language:jmx",
"language:jpn",
"language:ja",
"language:jra",
"language:kak",
"language:kam",
"language:kan",
"language:kn",
"language:kau",
"language:kr",
"language:kbq",
"language:kbx",
"language:kby",
"language:kek",
"language:ken",
"language:khb",
"language:khm",
"language:km",
"language:kik",
"language:ki",
"language:kin",
"language:rw",
"language:kir",
"language:ky",
"language:kjb",
"language:kmg",
"language:kmr",
"language:ku",
"language:kms",
"language:kmu",
"language:kor",
"language:ko",
"language:kqr",
"language:krr",
"language:ksw",
"language:kur",
"language:kvt",
"language:kwd",
"language:kwu",
"language:kwx",
"language:kxp",
"language:kyq",
"language:laj",
"language:lan",
"language:lao",
"language:lo",
"language:lbr",
"language:lfa",
"language:lgg",
"language:lgr",
"language:lhm",
"language:lhu",
"language:lkb",
"language:llg",
"language:lmp",
"language:lns",
"language:loh",
"language:lsi",
"language:lts",
"language:lug",
"language:lg",
"language:luy",
"language:lwl",
"language:mai",
"language:mal",
"language:ml",
"language:mam",
"language:mar",
"language:mr",
"language:mdr",
"language:mfh",
"language:mfj",
"language:mgg",
"language:mgm",
"language:mgo",
"language:mgq",
"language:mhx",
"language:miy",
"language:mkz",
"language:mle",
"language:mlk",
"language:mlw",
"language:mmu",
"language:mne",
"language:mnf",
"language:mnw",
"language:mot",
"language:mqj",
"language:mrn",
"language:mry",
"language:msb",
"language:muv",
"language:mve",
"language:mxu",
"language:mya",
"language:my",
"language:myk",
"language:myx",
"language:mzm",
"language:nas",
"language:nco",
"language:nep",
"language:ne",
"language:new",
"language:nge",
"language:ngn",
"language:nhx",
"language:njy",
"language:nla",
"language:nld",
"language:nl",
"language:nlv",
"language:nod",
"language:nsk",
"language:nsn",
"language:nso",
"language:nst",
"language:nuj",
"language:nwe",
"language:nwi",
"language:nxa",
"language:nxl",
"language:nya",
"language:ny",
"language:nyo",
"language:nyu",
"language:nza",
"language:odk",
"language:oji",
"language:oj",
"language:oki",
"language:omw",
"language:ori",
"language:or",
"language:ozm",
"language:pae",
"language:pag",
"language:pan",
"language:pa",
"language:pbt",
"language:pce",
"language:pcg",
"language:pdu",
"language:pea",
"language:pex",
"language:pis",
"language:pkb",
"language:pmf",
"language:pnz",
"language:por",
"language:pt",
"language:psp",
"language:pwg",
"language:qaa",
"language:qub",
"language:quc",
"language:quf",
"language:quz",
"language:qve",
"language:qvh",
"language:qvm",
"language:qvo",
"language:qxh",
"language:rel",
"language:rnl",
"language:ron",
"language:ro",
"language:roo",
"language:rue",
"language:rug",
"language:rus",
"language:ru",
"language:san",
"language:sa",
"language:saq",
"language:sat",
"language:sdk",
"language:sea",
"language:sgd",
"language:shn",
"language:sml",
"language:snk",
"language:snl",
"language:som",
"language:so",
"language:sot",
"language:st",
"language:sox",
"language:spa",
"language:es",
"language:sps",
"language:ssn",
"language:stk",
"language:swa",
"language:sw",
"language:swh",
"language:sxb",
"language:syw",
"language:taj",
"language:tam",
"language:ta",
"language:tbj",
"language:tdb",
"language:tdg",
"language:tdt",
"language:teo",
"language:tet",
"language:tgk",
"language:tg",
"language:tha",
"language:th",
"language:the",
"language:thk",
"language:thl",
"language:thy",
"language:tio",
"language:tkd",
"language:tnl",
"language:tnn",
"language:tnp",
"language:tnt",
"language:tod",
"language:tom",
"language:tpi",
"language:tpl",
"language:tpu",
"language:tsb",
"language:tsn",
"language:tn",
"language:tso",
"language:ts",
"language:tuv",
"language:tuz",
"language:tvs",
"language:udg",
"language:unr",
"language:urd",
"language:ur",
"language:uzb",
"language:uz",
"language:ven",
"language:ve",
"language:vie",
"language:vi",
"language:vif",
"language:war",
"language:wbm",
"language:wbr",
"language:wms",
"language:wni",
"language:wnk",
"language:wtk",
"language:xho",
"language:xh",
"language:xkg",
"language:xmd",
"language:xmg",
"language:xmm",
"language:xog",
"language:xty",
"language:yas",
"language:yav",
"language:ybb",
"language:ybh",
"language:ybi",
"language:ydd",
"language:yea",
"language:yet",
"language:yid",
"language:yi",
"language:yin",
"language:ymp",
"language:zaw",
"language:zho",
"language:zlm",
"language:zuh",
"language:zul",
"language:zu",
"license:cc-by-nc-4.0",
"region:us"
] | sil-ai | """
_HOMEPAGE = | """
# _URL_FOR_BLOOM_VIST_ANNOTATIONS = "https://bloom-vist.s3.amazonaws.com/bloom-vist.json" # outdated
_URL_FOR_BLOOM_VIST_ANNOTATIONS = "https://bloom-vist.s3.amazonaws.com/bloom_vist_june15.json" # updated with more captions, etc.
# _URL_FOR_BLOOM_VIST_ANNOTATIONS_DEDUPED = "https://bloom-vist.s3.amazonaws.com/bloom_vist_june15_deduped_by_album_and_story.json" # above, but deduped albums and stories
_URL_FOR_BLOOM_VIST_ANNOTATIONS_DEDUPED = "https://bloom-vist.s3.amazonaws.com/bloom_vist_june15_deduped.json" # above, but deduped albums and stories
_URL_FOR_BLOOM_VIST_ANNOTATIONS_DEDUPED_FILTERED = "https://bloom-vist.s3.amazonaws.com/bloom_vist_june15_deduped_langfiltered.json" # above, but deduped albums and stories
_URL_FOR_BLOOM_VIST_ANNOTATIONS_DEDUPED_FILTERED_STORYLETS = "https://bloom-vist.s3.amazonaws.com/bloom_vist_june15_deduped_june21_langfiltered_june22_with_storylets.json" # above, but added in "storylet_ids".
_URL_FOR_BLOOM_VIST_ANNOTATIONS_DEDUPED_FILTERED_STORYLETS_LICENSE_FIXED = "https://bloom-vist.s3.amazonaws.com/bloom_vist_june15_deduped_june21_langfiltered_june22_with_storylets_licenseupdated.json" # updated licenses.
_URL = _URL_FOR_BLOOM_VIST_ANNOTATIONS_DEDUPED_FILTERED_STORYLETS_LICENSE_FIXED # use this one!
# TODO: upload splits (June 15)
_URL_FOR_PRECOMPUTED_SPLIT_FILE = "https://huggingface.co/datasets/sil-ai/bloom-captioning/resolve/main/data/precomputed_split_urls.json" # TODO:
def codes_match(requested_lang, caption_lang_original_code):
alpha3_normalized_code_for_caption = _BLOOM_LANGUAGES_ALPHA3_CONVERSION_DICT[caption_lang_original_code]
if requested_lang == caption_lang_original_code or requested_lang == alpha3_normalized_code_for_caption:
return True
else:
return False
def story_quarantined(bloom_vist_annotations_dict, story_id):
metadata_for_story = bloom_vist_annotations_dict["stories"][story_id]
filter_results = []
for filter_method in metadata_for_story["filter_methods"].keys():
filter_result = metadata_for_story["filter_methods"][filter_method]
filter_results.append(filter_result["quarantine_result"])
return any(filter_results)
def vist_annotations_to_image_captioning(bloom_vist_annotations_dict, requested_lang): | null | 12 | 347 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- afr
- af
- aaa
- abc
- ada
- adq
- aeu
- agq
- ags
- ahk
- aia
- ajz
- aka
- ak
- ame
- amh
- am
- amp
- amu
- ann
- aph
- awa
- awb
- azn
- azo
- bag
- bam
- bm
- baw
- bax
- bbk
- bcc
- bce
- bec
- bef
- ben
- bn
- bfd
- bfm
- bfn
- bgf
- bho
- bhs
- bis
- bi
- bjn
- bjr
- bkc
- bkh
- bkm
- bkx
- bob
- bod
- bo
- boz
- bqm
- bra
- brb
- bri
- brv
- bss
- bud
- buo
- bwt
- bwx
- bxa
- bya
- bze
- bzi
- cak
- cbr
- ceb
- cgc
- chd
- chp
- cim
- clo
- cmn
- zh
- cmo
- csw
- cuh
- cuv
- dag
- ddg
- ded
- deu
- de
- dig
- dje
- dmg
- dnw
- dtp
- dtr
- dty
- dug
- eee
- ekm
- enb
- enc
- eng
- en
- ewo
- fas
- fa
- fil
- fli
- fon
- fra
- fr
- fub
- fuh
- gal
- gbj
- gou
- gsw
- guc
- guj
- gu
- guz
- gwc
- hao
- hat
- ht
- hau
- ha
- hbb
- hig
- hil
- hin
- hi
- hla
- hna
- hre
- hro
- idt
- ilo
- ind
- id
- ino
- isu
- ita
- it
- jgo
- jmx
- jpn
- ja
- jra
- kak
- kam
- kan
- kn
- kau
- kr
- kbq
- kbx
- kby
- kek
- ken
- khb
- khm
- km
- kik
- ki
- kin
- rw
- kir
- ky
- kjb
- kmg
- kmr
- ku
- kms
- kmu
- kor
- ko
- kqr
- krr
- ksw
- kur
- ku
- kvt
- kwd
- kwu
- kwx
- kxp
- kyq
- laj
- lan
- lao
- lo
- lbr
- lfa
- lgg
- lgr
- lhm
- lhu
- lkb
- llg
- lmp
- lns
- loh
- lsi
- lts
- lug
- lg
- luy
- lwl
- mai
- mal
- ml
- mam
- mar
- mr
- mdr
- mfh
- mfj
- mgg
- mgm
- mgo
- mgq
- mhx
- miy
- mkz
- mle
- mlk
- mlw
- mmu
- mne
- mnf
- mnw
- mot
- mqj
- mrn
- mry
- msb
- muv
- mve
- mxu
- mya
- my
- myk
- myx
- mzm
- nas
- nco
- nep
- ne
- new
- nge
- ngn
- nhx
- njy
- nla
- nld
- nl
- nlv
- nod
- nsk
- nsn
- nso
- nst
- nuj
- nwe
- nwi
- nxa
- nxl
- nya
- ny
- nyo
- nyu
- nza
- odk
- oji
- oj
- oki
- omw
- ori
- or
- ozm
- pae
- pag
- pan
- pa
- pbt
- pce
- pcg
- pdu
- pea
- pex
- pis
- pkb
- pmf
- pnz
- por
- pt
- psp
- pwg
- qaa
- qub
- quc
- quf
- quz
- qve
- qvh
- qvm
- qvo
- qxh
- rel
- rnl
- ron
- ro
- roo
- rue
- rug
- rus
- ru
- san
- sa
- saq
- sat
- sdk
- sea
- sgd
- shn
- sml
- snk
- snl
- som
- so
- sot
- st
- sox
- spa
- es
- sps
- ssn
- stk
- swa
- sw
- swh
- sxb
- syw
- taj
- tam
- ta
- tbj
- tdb
- tdg
- tdt
- teo
- tet
- tgk
- tg
- tha
- th
- the
- thk
- thl
- thy
- tio
- tkd
- tnl
- tnn
- tnp
- tnt
- tod
- tom
- tpi
- tpl
- tpu
- tsb
- tsn
- tn
- tso
- ts
- tuv
- tuz
- tvs
- udg
- unr
- urd
- ur
- uzb
- uz
- ven
- ve
- vie
- vi
- vif
- war
- wbm
- wbr
- wms
- wni
- wnk
- wtk
- xho
- xh
- xkg
- xmd
- xmg
- xmm
- xog
- xty
- yas
- yav
- ybb
- ybh
- ybi
- ydd
- yea
- yet
- yid
- yi
- yin
- ymp
- zaw
- zho
- zh
- zlm
- zuh
- zul
- zu
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_ids:
- image-captioning
paperswithcode_id: null
pretty_name: BloomCaptioning
extra_gated_prompt: |-
One more step before getting this dataset. This dataset is open access and available only for non-commercial use (except for portions of the dataset labeled explicitly with a `cc-by-sa` license). A "license" field paired with each of the dataset entries/samples specifies the Creative Commons license for that entry/sample.
These [Creative Commons licenses](https://creativecommons.org/about/cclicenses/) specify that:
1. You cannot use the dataset for or directed toward commercial advantage or monetary compensation (except for those portions of the dataset labeled specifically with a `cc-by-sa` license. If you would like to ask about commercial uses of this dataset, please [email us](mailto:sj@derivation.co).
2. Any public, non-commercial use of the data must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
3. For those portions of the dataset marked with an ND license, you cannot remix, transform, or build upon the material, and you may not distribute modified material.
In addition to the above implied by Creative Commons and when clicking "Access Repository" below, you agree:
1. Not to use the dataset for any use intended to or which has the effect of harming or enabling discrimination against individuals or groups based on legally protected characteristics or categories, including but not limited to discrimination against Indigenous People as outlined in Articles 2; 13-16; and 31 of the United Nations Declaration on the Rights of Indigenous People, 13 September 2007 and as subsequently amended and revised.
2. That your *contact information* (email address and username) can be shared with the model authors as well.
extra_gated_fields:
I have read the License and agree with its terms: checkbox
---
## Dataset Description
- **Homepage:** [SIL AI](https://ai.sil.org/)
- **Point of Contact:** [SIL AI email](mailto:idx_aqua@sil.org)
- **Source Data:** [Bloom Library](https://bloomlibrary.org/)
 
## Dataset Summary
**Bloom** is free, open-source software and an associated website [Bloom Library](https://bloomlibrary.org/), app, and services developed by [SIL International](https://www.sil.org/). Bloom’s primary goal is to equip non-dominant language communities and their members to create the literature they want for their community and children. Bloom also serves organizations that help such communities develop literature and education or other aspects of community development.
This version of the Bloom Library data is developed specifically for the image captioning task. It includes data from 351 languages across 31 language families. There is a mean of 32 stories and 319 image-caption pairs per language.
**Note**: If you speak one of these languages and can help provide feedback or corrections, please let us know!
**Note**: Although this data was used in the training of the [BLOOM model](https://huggingface.co/bigscience/bloom), this dataset only represents a small portion of the data used to train that model. Data from "Bloom Library" was combined with a large number of other datasets to train that model. "Bloom Library" is a project that existed prior to the BLOOM model, and is something separate. All that to say... We were using the "Bloom" name before it was cool. 😉
## Languages
Of the 500+ languages listed at BloomLibrary.org, there are 351 languages available in this dataset. Here are the corresponding ISO 639-3 codes:
aaa, abc, ada, adq, aeu, afr, agq, ags, ahk, aia, ajz, aka, ame, amh, amp, amu, ann, aph, awa, awb, azn, azo, bag, bam, baw, bax, bbk, bcc, bce, bec, bef, ben, bfd, bfm, bfn, bgf, bho, bhs, bis, bjn, bjr, bkc, bkh, bkm, bkx, bob, bod, boz, bqm, bra, brb, bri, brv, bss, bud, buo, bwt, bwx, bxa, bya, bze, bzi, cak, cbr, ceb, cgc, chd, chp, cim, clo, cmn, cmo, csw, cuh, cuv, dag, ddg, ded, deu, dig, dje, dmg, dnw, dtp, dtr, dty, dug, eee, ekm, enb, enc, eng, ewo, fas, fil, fli, fon, fra, fub, fuh, gal, gbj, gou, gsw, guc, guj, guz, gwc, hao, hat, hau, hbb, hig, hil, hin, hla, hna, hre, hro, idt, ilo, ind, ino, isu, ita, jgo, jmx, jpn, jra, kak, kam, kan, kau, kbq, kbx, kby, kek, ken, khb, khm, kik, kin, kir, kjb, kmg, kmr, kms, kmu, kor, kqr, krr, ksw, kur, kvt, kwd, kwu, kwx, kxp, kyq, laj, lan, lao, lbr, lfa, lgg, lgr, lhm, lhu, lkb, llg, lmp, lns, loh, lsi, lts, lug, luy, lwl, mai, mal, mam, mar, mdr, mfh, mfj, mgg, mgm, mgo, mgq, mhx, miy, mkz, mle, mlk, mlw, mmu, mne, mnf, mnw, mot, mqj, mrn, mry, msb, muv, mve, mxu, mya, myk, myx, mzm, nas, nco, nep, new, nge, ngn, nhx, njy, nla, nld, nlv, nod, nsk, nsn, nso, nst, nuj, nwe, nwi, nxa, nxl, nya, nyo, nyu, nza, odk, oji, oki, omw, ori, ozm, pae, pag, pan, pbt, pce, pcg, pdu, pea, pex, pis, pkb, pmf, pnz, por, psp, pwg, qub, quc, quf, quz, qve, qvh, qvm, qvo, qxh, rel, rnl, ron, roo, rue, rug, rus, san, saq, sat, sdk, sea, sgd, shn, sml, snk, snl, som, sot, sox, spa, sps, ssn, stk, swa, swh, sxb, syw, taj, tam, tbj, tdb, tdg, tdt, teo, tet, tgk, tha, the, thk, thl, thy, tio, tkd, tnl, tnn, tnp, tnt, tod, tom, tpi, tpl, tpu, tsb, tsn, tso, tuv, tuz, tvs, udg, unr, urd, uzb, ven, vie, vif, war, wbm, wbr, wms, wni, wnk, wtk, xho, xkg, xmd, xmg, xmm, xog, xty, yas, yav, ybb, ybh, ybi, ydd, yea, yet, yid, yin, ymp, zaw, zho, zlm, zuh, zul
## Dataset Statistics
Some of the languages included in the dataset contain only one or a few "stories." These are not split between training, validation, and test. For languages with a higher number of available stories, we include the following statistics:
| ISO 639-3 | stories | image-caption pairs |
|:------------|-----------:|-----------------------:|
| ahk | 101 | 907 |
| awa | 163 | 1200 |
| bam | 4 | 86 |
| ben | 251 | 2235 |
| bho | 173 | 1172 |
| boz | 5 | 102 |
| bzi | 66 | 497 |
| cak | 67 | 817 |
| ceb | 418 | 2953 |
| cgc | 197 | 1638 |
| chd | 1 | 84 |
| dty | 172 | 1310 |
| eng | 2633 | 28618 |
| fas | 129 | 631 |
| fra | 403 | 5278 |
| hat | 260 | 2411 |
| hau | 256 | 1865 |
| hbb | 27 | 273 |
| ind | 259 | 2177 |
| jra | 139 | 1423 |
| kak | 195 | 1416 |
| kan | 21 | 168 |
| kek | 36 | 621 |
| kir | 382 | 4026 |
| kjb | 102 | 984 |
| kor | 132 | 2773 |
| mai          |       180 |                  1211 |
| mam | 134 | 1317 |
| mhx | 98 | 945 |
| mya | 38 | 421 |
| myk | 34 | 341 |
| nep | 200 | 1507 |
| new | 177 | 1225 |
| por | 163 | 3101 |
| quc | 99 | 817 |
| rus | 353 | 3933 |
| sdk | 11 | 153 |
| snk | 35 | 356 |
| spa | 528 | 6111 |
| stk | 7 | 113 |
| tgl | 0 | 0 |
| tha | 285 | 3023 |
| thl | 185 | 1464 |
| tpi | 201 | 2162 |
## Dataset Structure
### Data Instances
The examples look like this for Hausa:
```
from datasets import load_dataset
# Specify the language code.
iso639_3_letter_code = "hau"  # Hausa, matching the example output below
dataset = load_dataset("sil-ai/bloom-captioning", iso639_3_letter_code,
                       use_auth_token=True, download_mode='force_redownload')
# An entry in the dataset consists of a image caption along with
# a link to the corresponding image (and various pieces of metadata).
print(dataset['train'][0])
```
This would produce an output:
```
{'image_id': '5e7e2ab6-493f-4430-a635-695fbff76cf0',
'image_url': 'https://bloom-vist.s3.amazonaws.com/%E0%A4%AF%E0%A5%87%E0%A4%B8%E0%A5%81%20%E0%A4%9A%E0%A5%81%E0%A4%B5%E0%A4%BE%20%E0%A4%89%E0%A4%A0%E0%A5%81%E0%A4%99%E0%A5%8D%E2%80%8C%E0%A4%99%E0%A4%BF%20%E0%A4%B2%E0%A4%BE%E0%A4%AE%E0%A5%8D%E2%80%8C%E0%A4%9F%E0%A4%BF%E0%A4%AF%E0%A4%BE%E0%A4%A8%E0%A4%BE/image2.jpg',
'caption': 'Lokacinan almajiran suna tuƙa jirgin ruwansu, amma can cikin dare sun kai tsakiyar tafkin kaɗai. Suna tuƙi da wahala saboda iska tana busawa da ƙarfi gaba da su.',
'story_id': 'cd17125d-66c6-467c-b6c3-7463929faff9',
'album_id': 'a3074fc4-b88f-4769-a6de-dc952fdb35f0',
'original_bloom_language_tag': 'ha',
'index_in_story': 0}
```
To download all of the images locally to a directory `images`, you can do something similar to the following:
```
from PIL import Image
import io
import os
import urllib.request
import uuid
from concurrent.futures import ThreadPoolExecutor
from functools import partial
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
os.makedirs("images", exist_ok=True)  # downloaded images are saved into ./images
def fetch_single_image(image_url, timeout=None, retries=0):
    request = urllib.request.Request(
        image_url,
        data=None,
        headers={"user-agent": USER_AGENT},
    )
    with urllib.request.urlopen(request, timeout=timeout) as req:
        if 'png' in image_url:
            # Flatten transparent PNGs onto a white background before saving as JPEG.
            png = Image.open(io.BytesIO(req.read())).convert('RGBA')
            png.load()  # required for png.split()
            background = Image.new("RGB", png.size, (255, 255, 255))
            background.paste(png, mask=png.split()[3])  # 3 is the alpha channel
            image_id = str(uuid.uuid4())
            image_path = "images/" + image_id + ".jpg"
            background.save(image_path, 'JPEG', quality=80)
        else:
            image = Image.open(io.BytesIO(req.read()))
            image_id = str(uuid.uuid4())
            image_path = "images/" + image_id + ".jpg"
            image.save(image_path)
    return image_path
def fetch_images(batch, num_threads, timeout=None, retries=3):
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image_path"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
    return batch
num_threads = 20
dataset = dataset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
### Data Fields
The metadata fields below are available; a short sketch after the list shows how to group captions back into stories:
- **image_id**: a unique ID for the image
- **image_url**: a link for downloading the image
- **caption**: a caption corresponding to the image
- **story_id**: a unique ID for the corresponding story in which the caption appears
- **album_id**: a unique ID for the corresponding album in which the image appears
- **original_bloom_language_tag**: the original language identification from the Bloom library
- **index_in_story**: an index corresponding to the order of the image-caption pair in the corresponding story
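For example, a small sketch (our own helper, not part of the loader; it assumes `dataset` was loaded as in the example above) that uses `story_id` and `index_in_story` to group the `train` split back into ordered stories:
```
from collections import defaultdict
stories = defaultdict(list)
for row in dataset["train"]:
    stories[row["story_id"]].append((row["index_in_story"], row["caption"]))
# Print the captions of one story in reading order.
some_story = next(iter(stories))
for _, caption in sorted(stories[some_story]):
    print(caption)
```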
### Data Splits
All languages include a train, validation, and test split. However, for languages with a small number of stories, some of these splits may be empty. In such cases, we recommend using the available data for testing only or for zero-shot experiments.
**NOTE:** The captions for the test split are currently hidden due to an ongoing shared task competition. They have been replaced by a placeholder `<hidden>` token.
## Changelog
- **25 October 2022** - Initial release
- **25 October 2022** - Update to include licenses on each data item.
|
adsabs/FOCAL | 2023-10-06T15:13:28.000Z | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"astronomy",
"region:us"
] | adsabs | null | null | null | 1 | 347 | ---
annotations_creators:
- expert-generated
license: cc-by-4.0
task_categories:
- token-classification
language:
- en
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
tags:
- astronomy
---
# Function Of Citation in Astrophysics Literature (FOCAL): Dataset and Task
*Can you explain why the authors made a given citation?*
This dataset was created as a [shared task](https://ui.adsabs.harvard.edu/WIESP/2023/shared_task_1) for [WIESP @ AACL-IJCNLP 2023](https://ui.adsabs.harvard.edu/WIESP/2023/).
## Dataset Description
Datasets are in JSON Lines format (each line is a json dictionary).
Each entry consists of a dictionary with the following keys:
- `"Identifier"`: unique string to identify the entry
- `"Paragraph"`: text string from an astrophysics paper
- `"Citation Text"`: list of strings forming the citation (most often a single string, but sometimes the citation text is split up)
- `"Citation Start End"`: list of integer pairs denoting where the citation starts and end in `"Paragraph"` (most often a single pair, sometimes the citation text is split up, if so follows the order in `"Citation Text"`)
- `"Functions Text"`: list of strings highlighting parts of the paragraph that explain the function of the citation
- `"Functions Label"`: list of strings with the label for each text element in `"Functions Text"` (in same order)
- `"Functions Start End"`: list of integer pairs denoting where the elements in `"Functions Text"` start and end in `"Paragraph"`(in same order)
Start and end are defined by the character position in the `"Paragraph"` string.
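For example, a minimal sketch (assuming each pair is a `[start, end]` list and the offsets line up as described) that slices the paragraph with these offsets:
```python
from datasets import load_dataset
focal = load_dataset("adsabs/FOCAL", split="train")
entry = focal[0]
paragraph = entry["Paragraph"]
# Compare each stored citation string with the span recovered from its offsets.
for text, (start, end) in zip(entry["Citation Text"], entry["Citation Start End"]):
    print(repr(text), "<->", repr(paragraph[start:end]))
# Print each function span together with its label.
for text, label, (start, end) in zip(entry["Functions Text"], entry["Functions Label"], entry["Functions Start End"]):
    print(label, "->", repr(paragraph[start:end]))
```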
## Instructions for Workshop Participants:
How to load the data using the Huggingface library:
```python
from datasets import load_dataset
dataset = load_dataset("adsabs/FOCAL") # !!! Only loads the training split. Validation and testing splits will be added after the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/) has ended.
```
How to load the data if you cloned the repository locally:
(assuming `./FOCAL-TRAINING.jsonl` is in the current directory, change as needed)
- python (as list of dictionaries):
```python
import json
with open("./FOCAL-TRAINING.jsonl", 'r') as f:
    focal_training_from_json = [json.loads(l) for l in list(f)]
```
- into Huggingface (as a Huggingface Dataset):
```python
from datasets import Dataset
focal_training_from_json = Dataset.from_json(path_or_paths="./FOCAL-TRAINING.jsonl")
```
## File List
```
├── FOCAL-TRAINING.jsonl (2421 samples for training)
├── FOCAL-VALIDATION-NO-LABELS.jsonl (606 samples for validation without the labels. Used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── FOCAL-TESTING-NO-LABELS.jsonl (821 samples for testing without the labels. Used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── /scoring_scripts/score_focal_seqeval.py (scoring script used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── /scoring_scripts/score_focal_labels_only.py (scoring script used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── /data/train.parquet (train split of FOCAL)
├── README.MD (this file)
└──
```
Maintainer: Felix Grezes (ORCID: 0000-0001-8714-7774)
Data annotator: Tom Allen (ORCID: 0000-0002-5532-4809) |
AdaptLLM/medicine-tasks | 2023-09-26T08:36:39.000Z | [
"arxiv:2309.09530",
"region:us"
] | AdaptLLM | null | null | null | 1 | 347 | ---
configs:
- config_name: ChemProt
data_files:
- split: test
path: "ChemProt/test.json"
- config_name: MQP
data_files:
- split: test
path: "MedQs/test.json"
- config_name: PubMedQA
data_files:
- split: test
path: "pubmed_qa/test.json"
- config_name: RCT
data_files:
- split: test
path: "RCT/test.json"
- config_name: USMLE
data_files:
- split: test
path: "usmle/test.json"
---
# Adapting Large Language Models via Reading Comprehension
This repo contains the evaluation datasets for our paper [Adapting Large Language Models via Reading Comprehension](https://arxiv.org/pdf/2309.09530.pdf)
We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in **biomedicine, finance, and law domains**. Our 7B model competes with much larger domain-specific models like BloombergGPT-50B. Moreover, our domain-specific reading comprehension texts enhance model performance even on general benchmarks, indicating potential for developing a general LLM across more domains.
## GitHub repo:
https://github.com/microsoft/LMOps
## Domain-specific LLMs:
Our models for different domains are now available on Hugging Face: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM). The performance of AdaptLLM compared to other domain-specific LLMs is shown below:
<p align='center'>
<img src="./comparison.png" width="700">
</p>
## Domain-specific Tasks:
To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
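A hedged sketch of loading one of these evaluation sets with the `datasets` library (config names follow the YAML header above; inspect an example to see the exact instruction/completion fields):
```python
from datasets import load_dataset
pubmedqa = load_dataset("AdaptLLM/medicine-tasks", "PubMedQA", split="test")
print(pubmedqa[0])  # shows the filled-in zero/few-shot input and the expected completion
```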
## Citation:
```bibtex
@inproceedings{AdaptLLM,
title={Adapting Large Language Models via Reading Comprehension},
author={Daixuan Cheng and Shaohan Huang and Furu Wei},
url={https://arxiv.org/abs/2309.09530},
year={2023},
}
```
|
mstz/diamonds | 2023-04-16T17:27:20.000Z | [
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc",
"student performance",
"tabular_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | null | null | 0 | 346 | ---
language:
- en
tags:
- student performance
- tabular_classification
- multiclass_classification
- UCI
pretty_name: Diamond
size_categories:
- 10K<n<100K
task_categories:
- tabular-classification
configs:
- encoding
- cut
- cut_binary
license: cc
---
# Diamonds
The [Diamonds dataset](https://www.kaggle.com/datasets/ulrikthygepedersen/diamonds) from Kaggle.
Dataset collecting properties of cut diamonds to determine the cut quality.
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|-----------------------------------------------------------------|
| encoding | | Encoding dictionary showing original values of encoded features.|
| cut | Multiclass classification | Predict the cut quality of the diamond. |
| cut_binary | Binary classification | Is the cut quality at least very good?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/diamonds", "cut")["train"]
```
# Features
|**Feature** |**Description**|
|-----------------------------------|---------------|
|`carat` | `float32` |
|`color` | `string` |
|`clarity` | `float32` |
|`depth` | `float32` |
|`table` | `float32` |
|`price` | `float32` |
|`observation_point_on_axis_x` | `float32` |
|`observation_point_on_axis_y` | `float32` |
|`observation_point_on_axis_z` | `float32` |
|`cut` | `int8` | |
result-kand2-sdxl-wuerst-karlo/52b331a6 | 2023-09-27T18:08:07.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 346 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 189
num_examples: 10
download_size: 1383
dataset_size: 189
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "52b331a6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
circa | 2023-01-25T14:28:00.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"question-answer-pair-classification",
"arxiv:2010.03450",
"region:us"
] | null | The Circa (meaning ‘approximately’) dataset aims to help machine learning systems
to solve the problem of interpreting indirect answers to polar questions.
The dataset contains pairs of yes/no questions and indirect answers, together with
annotations for the interpretation of the answer. The data is collected in 10
different social conversational situations (eg. food preferences of a friend).
NOTE: There might be missing labels in the dataset and we have replaced them with -1.
The original dataset contains no train/dev/test splits. | @InProceedings{louis_emnlp2020,
author = "Annie Louis and Dan Roth and Filip Radlinski",
title = ""{I}'d rather just go to bed": {U}nderstanding {I}ndirect {A}nswers",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods
in Natural Language Processing",
year = "2020",
} | null | 2 | 345 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: circa
pretty_name: CIRCA
tags:
- question-answer-pair-classification
dataset_info:
features:
- name: context
dtype: string
- name: question-X
dtype: string
- name: canquestion-X
dtype: string
- name: answer-Y
dtype: string
- name: judgements
dtype: string
- name: goldstandard1
dtype:
class_label:
names:
'0': 'Yes'
'1': 'No'
'2': In the middle, neither yes nor no
'3': Probably yes / sometimes yes
'4': Probably no
'5': Yes, subject to some conditions
'6': Other
'7': I am not sure how X will interpret Y’s answer
- name: goldstandard2
dtype:
class_label:
names:
'0': 'Yes'
'1': 'No'
'2': In the middle, neither yes nor no
'3': Yes, subject to some conditions
'4': Other
splits:
- name: train
num_bytes: 8149489
num_examples: 34268
download_size: 7766077
dataset_size: 8149489
---
# Dataset Card for CIRCA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CIRCA homepage](https://github.com/google-research-datasets/circa)
- **Repository:** [CIRCA repository](https://github.com/google-research-datasets/circa)
- **Paper:** ["I’d rather just go to bed”: Understanding Indirect Answers](https://arxiv.org/abs/2010.03450)
- **Point of Contact:** [Circa team, Google](circa@google.com)
### Dataset Summary
The Circa (meaning ‘approximately’) dataset aims to help machine learning systems to solve the problem of interpreting indirect answers to polar questions.
The dataset contains pairs of yes/no questions and indirect answers, together with annotations for the interpretation of the answer. The data is collected in 10 different social conversational situations (eg. food preferences of a friend).
The following are the situational contexts for the dialogs in the data.
```
1. X wants to know about Y’s food preferences
2. X wants to know what activities Y likes to do during weekends.
3. X wants to know what sorts of books Y likes to read.
4. Y has just moved into a neighbourhood and meets his/her new neighbour X.
5. X and Y are colleagues who are leaving work on a Friday at the same time.
6. X wants to know about Y's music preferences.
7. Y has just travelled from a different city to meet X.
8. X and Y are childhood neighbours who unexpectedly run into each other at a cafe.
9. Y has just told X that he/she is thinking of buying a flat in New York.
10. Y has just told X that he/she is considering switching his/her job.
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
The columns indicate:
```
1. id : unique id for the question-answer pair
2. context : the social situation for the dialogue. One of 10 situations (see next section). Each
situation is a dialogue between a person who poses the question (X) and the person who
answers (Y).
3. question-X : the question posed by X
4. canquestion-X : a (automatically) rewritten version of question into declarative form
Eg. Do you like Italian? --> I like Italian. See the paper for details.
5. answer-Y : the answer given by Y to X
6. judgements : the interpretations for the QA pair from 5 annotators. The value is a list of 5 strings,
separated by the token ‘#’
7. goldstandard1 : a gold standard majority judgement from the annotators. The value is the most common
interpretation and picked by at least 3 (out of 5 annotators). When a majority
judgement was not reached by the above criteria, the value is ‘NA’
8. goldstandard2 : Here the labels ‘Probably yes / sometimes yes’, ‘Probably no', and 'I am not sure how
X will interpret Y’s answer' are mapped respectively to ‘Yes’, ‘No’, and 'In the
middle, neither yes nor no’ before computing the majority. Still the label must be given
at least 3 times to become the majority choice. This method represents a less strict way
of analyzing the interpretations.
```
### Data Fields
```
id : 1
context : X wants to know about Y's food preferences.
question-X : Are you vegan?
canquestion-X : I am vegan.
answer-Y : I love burgers too much.
judgements : no#no#no#no#no
goldstandard1 : no (label(s) used for the classification task)
goldstandard2 : no (label(s) used for the classification task)
```
### Data Splits
There are no explicit train/val/test splits in this dataset.
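Since no splits are provided, a minimal sketch (using the `datasets` library) of creating your own held-out split and parsing the `judgements` string:
```
from datasets import load_dataset
circa = load_dataset("circa", split="train")
# `judgements` packs the five annotator labels into one '#'-separated string.
print(circa[0]["judgements"].split("#"))
# Create a reproducible 80/20 split for experiments.
splits = circa.train_test_split(test_size=0.2, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
print(len(train_ds), len(test_ds))
```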
## Dataset Creation
### Curation Rationale
They revisited a pragmatic inference problem in dialog: Understanding indirect responses to questions. Humans can interpret ‘I’m starving.’ in response to ‘Hungry?’, even without direct cue words such as ‘yes’ and ‘no’. In dialog systems, allowing natural responses rather than closed vocabularies would be similarly beneficial. However, today’s systems are only as sensitive to these pragmatic moves as their language model allows. They create and release the first large-scale English language corpus ‘Circa’ with 34,268 (polar question, indirect answer) pairs to enable progress on this task.
### Source Data
#### Initial Data Collection and Normalization
The QA pairs and judgements were collected using crowd annotations in three phases. They recruited English native speakers. The full descriptions of the data collection and quality control are present in [EMNLP 2020 paper](https://arxiv.org/pdf/2010.03450.pdf). Below is a brief overview only.
Phase 1: In the first phase, they collected questions only. They designed 10 imaginary social situations which give the annotator a context for the conversation. Examples are:
```
‘asking a friend for food preferences’
‘meeting your childhood neighbour’
‘your friend wants to buy a flat in New York’
```
Annotators were asked to suggest questions which could be asked in each situation, such that each question only requires a ‘yes’ or ‘no’ answer. 100 annotators produced 5 questions each for the 10 situations, resulting in 5000 questions.
Phase 2: Here they focused on eliciting answers to the questions. They sampled 3500 questions from our previous set. For each question, They collected possible answers from 10 different annotators. The annotators were instructed to provide a natural phrase or a sentence as the answer and to avoid the use of explicit ‘yes’ and ‘no’ words.
Phase 3: Finally the QA pairs (34,268) were given to a third set of annotators who were asked how the question seeker would likely interpret a particular answer. These annotators had the following options to choose from:
```
* 'Yes'
* 'Probably yes' / 'sometimes yes'
* 'Yes, subject to some conditions'
* 'No'
* 'Probably no'
* 'In the middle, neither yes nor no'
* 'I am not sure how X will interpret Y's answer'
```
#### Who are the source language producers?
The rest of the data apart from 10 initial questions was collected using crowd workers. They ran pilots for each step of data collection, and perused their results manually to ensure clarity in guidelines, and quality of the data. They also recruited native English speakers, mostly from the USA, and a few from the UK and Canada. They did not collect any further information about the crowd workers.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The rest of the data apart from 10 initial questions was collected using crowd workers. They ran pilots for each step of data collection, and perused their results manually to ensure clarity in guidelines, and quality of the data. They also recruited native English speakers, mostly from the USA, and a few from the UK and Canada. They did not collect any further information about the crowd workers.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset is the work of Annie Louis, Dan Roth, and Filip Radlinski from Google LLC.
### Licensing Information
This dataset was made available under the Creative Commons Attribution 4.0 License. A full copy of the license can be found at https://creativecommons.org/licenses/by-sa/4.0/.
### Citation Information
```
@InProceedings{louis_emnlp2020,
author = "Annie Louis and Dan Roth and Filip Radlinski",
title = ""{I}'d rather just go to bed": {U}nderstanding {I}ndirect {A}nswers",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
year = "2020",
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset. |
codeparrot/xlcost-text-to-code | 2022-10-25T09:30:47.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"license:cc-by-sa-4.0",
"arxiv:2206.08474",
"region:us"
] | codeparrot | XLCoST is a machine learning benchmark dataset that contains fine-grained parallel data in 7 commonly used programming languages (C++, Java, Python, C#, Javascript, PHP, C), and natural language (English). | @misc{zhu2022xlcost,
title = {XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence},
url = {https://arxiv.org/abs/2206.08474},
author = {Zhu, Ming and Jain, Aneesh and Suresh, Karthik and Ravindran, Roshan and Tipirneni, Sindhu and Reddy, Chandan K.},
year = {2022},
eprint={2206.08474},
archivePrefix={arXiv}
} | null | 22 | 345 | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: xlcost-text-to-code
---
# XLCost for text-to-code synthesis
## Dataset Description
This is a subset of [XLCoST benchmark](https://github.com/reddy-lab-code-research/XLCoST), for text-to-code generation at snippet level and program level for **7** programming languages: `Python, C, C#, C++, Java, Javascript and PHP`.
## Languages
The dataset contains text in English and its corresponding code translation. Each program is divided into several code snippets, so the snippet-level subsets contain these code snippets with their corresponding comments; for program-level subsets, the comments were concatenated into one long description. Moreover, programs in all the languages are aligned at the snippet level, and the comment for a particular snippet is the same across all the languages.
## Dataset Structure
To load the dataset you need to specify a subset among the **14 existing instances**: `LANGUAGE-snippet-level/LANGUAGE-program-level` for `LANGUAGE` in `[Python, C, Csharp, C++, Java, Javascript and PHP]`. By default, `Python-snippet-level` is loaded.
```python
from datasets import load_dataset
load_dataset("codeparrot/xlcost-text-to-code", "Python-program-level")
DatasetDict({
train: Dataset({
features: ['text', 'code'],
num_rows: 9263
})
test: Dataset({
features: ['text', 'code'],
num_rows: 887
})
validation: Dataset({
features: ['text', 'code'],
num_rows: 472
})
})
```
```python
next(iter(data["train"]))
{'text': 'Maximum Prefix Sum possible by merging two given arrays | Python3 implementation of the above approach ; Stores the maximum prefix sum of the array A [ ] ; Traverse the array A [ ] ; Stores the maximum prefix sum of the array B [ ] ; Traverse the array B [ ] ; Driver code',
'code': 'def maxPresum ( a , b ) : NEW_LINE INDENT X = max ( a [ 0 ] , 0 ) NEW_LINE for i in range ( 1 , len ( a ) ) : NEW_LINE INDENT a [ i ] += a [ i - 1 ] NEW_LINE X = max ( X , a [ i ] ) NEW_LINE DEDENT Y = max ( b [ 0 ] , 0 ) NEW_LINE for i in range ( 1 , len ( b ) ) : NEW_LINE INDENT b [ i ] += b [ i - 1 ] NEW_LINE Y = max ( Y , b [ i ] ) NEW_LINE DEDENT return X + Y NEW_LINE DEDENT A = [ 2 , - 1 , 4 , - 5 ] NEW_LINE B = [ 4 , - 3 , 12 , 4 , - 3 ] NEW_LINE print ( maxPresum ( A , B ) ) NEW_LINE'}
```
Note that the data undergoes some tokenization, hence the additional whitespace and the use of NEW_LINE instead of `\n`, INDENT instead of `\t`, and DEDENT to cancel indentation...
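If you need normally formatted code, a rough detokenizer sketch (our own helper, not part of the dataset; token spacing within a line is not restored) could look like this:
```python
def detokenize(code: str, indent: str = "    ") -> str:
    """Rebuild newlines and indentation from the NEW_LINE / INDENT / DEDENT markers."""
    lines, current, level = [], [], 0
    for token in code.split():
        if token == "NEW_LINE":
            lines.append(indent * level + " ".join(current))
            current = []
        elif token == "INDENT":
            level += 1
        elif token == "DEDENT":
            level = max(level - 1, 0)
        else:
            current.append(token)
    if current:
        lines.append(indent * level + " ".join(current))
    return "\n".join(lines)
# e.g. print(detokenize(next(iter(data["train"]))["code"]))
```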
## Data Fields
* text: natural language description/comment
* code: code at snippet/program level
## Data Splits
Each subset has three splits: train, test and validation.
## Citation Information
```
@misc{zhu2022xlcost,
title = {XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence},
url = {https://arxiv.org/abs/2206.08474},
author = {Zhu, Ming and Jain, Aneesh and Suresh, Karthik and Ravindran, Roshan and Tipirneni, Sindhu and Reddy, Chandan K.},
year = {2022},
eprint={2206.08474},
archivePrefix={arXiv}
}
``` |
Babelscape/multinerd | 2023-04-20T12:43:31.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"source_datasets:original",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"language:nl",
"language:pl",
"language:pt",
"language:ru",
"language:zh",
"license:cc-by-nc-sa-4.0",
"structure-prediction",
"region:us"
] | Babelscape | null | null | null | 7 | 345 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- de
- en
- es
- fr
- it
- nl
- pl
- pt
- ru
- zh
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: multinerd-dataset
tags:
- structure-prediction
---
## Table of Contents
- [Description](#description)
- [Dataset Structure](#dataset-structure)
- [Additional Information](#additional-information)
## Dataset Card for MultiNERD dataset
## Dataset Description
- **Summary:** Training data for fine-grained NER in 10 languages.
- **Repository:** [https://github.com/Babelscape/multinerd](https://github.com/Babelscape/multinerd)
- **Paper:** [https://aclanthology.org/multinerd](https://aclanthology.org/2022.findings-naacl.60/)
- **Point of Contact:** [tedeschi@babelscape.com](tedeschi@babelscape.com)
## Description
- **Summary:** In a nutshell, MultiNERD is the first **language-agnostic** methodology for automatically creating **multilingual, multi-genre and fine-grained annotations** for **Named Entity Recognition** and **Entity Disambiguation**. Specifically, it can be seen as an extension of the combination of two prior works from our research group that are [WikiNEuRal](https://www.github.com/Babelscape/wikineural), from which we took inspiration for the state-of-the-art silver-data creation methodology, and [NER4EL](https://www.github.com/Babelscape/NER4EL), from which we took the fine-grained classes and inspiration for the entity linking part. The produced dataset covers: **10 languages** (Chinese, Dutch, English, French, German, Italian, Polish, Portuguese, Russian and Spanish), **15 NER categories** (Person (PER), Location (LOC), Organization (ORG), Animal (ANIM), Biological entity (BIO), Celestial Body (CEL), Disease (DIS), Event (EVE), Food (FOOD), Instrument (INST), Media (MEDIA), Plant (PLANT), Mythological entity (MYTH), Time (TIME) and Vehicle (VEHI)), and **2 textual genres** ([Wikipedia](https://www.wikipedia.org/) and [WikiNews](https://www.wikinews.org/));
- **Repository:** [https://github.com/Babelscape/multinerd](https://github.com/Babelscape/multinerd)
- **Paper:** [https://aclanthology.org/multinerd](https://aclanthology.org/2022.findings-naacl.60/)
- **Point of Contact:** [tedeschi@babelscape.com](tedeschi@babelscape.com)
## Dataset Structure
The data fields are the same among all splits.
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels (`int`).
- `lang`: a `string` feature. Full list of languages: Chinese (zh), Dutch (nl), English (en), French (fr), German (de), Italian (it), Polish (pl), Portuguese (pt), Russian (ru), Spanish (es).
- The full tagset with indices is reported below; a small decoding sketch follows it:
```python
{
"O": 0,
"B-PER": 1,
"I-PER": 2,
"B-ORG": 3,
"I-ORG": 4,
"B-LOC": 5,
"I-LOC": 6,
"B-ANIM": 7,
"I-ANIM": 8,
"B-BIO": 9,
"I-BIO": 10,
"B-CEL": 11,
"I-CEL": 12,
"B-DIS": 13,
"I-DIS": 14,
"B-EVE": 15,
"I-EVE": 16,
"B-FOOD": 17,
"I-FOOD": 18,
"B-INST": 19,
"I-INST": 20,
"B-MEDIA": 21,
"I-MEDIA": 22,
"B-MYTH": 23,
"I-MYTH": 24,
"B-PLANT": 25,
"I-PLANT": 26,
"B-TIME": 27,
"I-TIME": 28,
"B-VEHI": 29,
"I-VEHI": 30,
}
```
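A minimal sketch (assuming the default configuration exposes a `train` split with the fields described above) that maps these integer ids back to tag strings:
```python
from datasets import load_dataset
tag2id = {"O": 0, "B-PER": 1, "I-PER": 2, "B-ORG": 3, "I-ORG": 4, "B-LOC": 5, "I-LOC": 6}  # extend with the full tagset above
id2tag = {v: k for k, v in tag2id.items()}
multinerd = load_dataset("Babelscape/multinerd", split="train")
example = multinerd[0]
print(example["lang"])
print([(tok, id2tag.get(tag, str(tag))) for tok, tag in zip(example["tokens"], example["ner_tags"])])
```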
## Additional Information
- **Licensing Information**: Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
- **Citation Information**: Please consider citing our work if you use data and/or code from this repository.
```bibtex
@inproceedings{tedeschi-navigli-2022-multinerd,
title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)",
author = "Tedeschi, Simone and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-naacl.60",
doi = "10.18653/v1/2022.findings-naacl.60",
pages = "801--812",
abstract = "Named Entity Recognition (NER) is the task of identifying named entities in texts and classifying them through specific semantic categories, a process which is crucial for a wide range of NLP applications. Current datasets for NER focus mainly on coarse-grained entity types, tend to consider a single textual genre and to cover a narrow set of languages, thus limiting the general applicability of NER systems.In this work, we design a new methodology for automatically producing NER annotations, and address the aforementioned limitations by introducing a novel dataset that covers 10 languages, 15 NER categories and 2 textual genres.We also introduce a manually-annotated test set, and extensively evaluate the quality of our novel dataset on both this new test set and standard benchmarks for NER.In addition, in our dataset, we include: i) disambiguation information to enable the development of multilingual entity linking systems, and ii) image URLs to encourage the creation of multimodal systems.We release our dataset at https://github.com/Babelscape/multinerd.",
}
```
- **Contributions**: Thanks to [@sted97](https://github.com/sted97) for adding this dataset.
|
jxie/flickr8k | 2023-06-25T22:25:03.000Z | [
"region:us"
] | jxie | null | null | null | 0 | 345 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption_0
dtype: string
- name: caption_1
dtype: string
- name: caption_2
dtype: string
- name: caption_3
dtype: string
- name: caption_4
dtype: string
splits:
- name: train
num_bytes: 826721431.0
num_examples: 6000
- name: validation
num_bytes: 138017615.0
num_examples: 1000
- name: test
num_bytes: 136871307.0
num_examples: 1000
download_size: 274629589
dataset_size: 1101610353.0
---
# Dataset Card for "flickr8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KShivendu/dbpedia-entities-openai-1M | 2023-07-07T08:35:48.000Z | [
"size_categories:1M<n<10M",
"language:en",
"license:mit",
"region:us"
] | KShivendu | null | null | null | 6 | 344 | ---
license: mit
dataset_info:
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: openai
sequence: float32
splits:
- name: train
num_bytes: 12383152
num_examples: 1000000
download_size: 12383152
dataset_size: 1000000
language:
- en
pretty_name: OpenAI 1M with DBPedia Entities
size_categories:
- 1M<n<10M
---
1M OpenAI embeddings (1,536 dimensions each) generated in June 2023.
Text used for each embedding: title (string) + text (string).
First used for the pgvector vs Qdrant vector database benchmark: https://nirantk.com/writing/pgvector-vs-qdrant/
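As an illustration (not from the original card), the sketch below shows one way to stream a slice of the dataset and compute cosine similarities with NumPy; the column names follow the `dataset_info` block above.
```python
import numpy as np
from datasets import load_dataset

# Stream a small slice rather than downloading all 1M rows.
ds = load_dataset("KShivendu/dbpedia-entities-openai-1M", split="train", streaming=True)
rows = [row for _, row in zip(range(100), ds)]

vectors = np.array([row["openai"] for row in rows], dtype=np.float32)  # shape (100, 1536)
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

# Cosine similarity of the first entry against the remaining 99.
scores = vectors[1:] @ vectors[0]
best = int(np.argmax(scores)) + 1
print(rows[0]["title"], "->", rows[best]["title"], float(scores[best - 1]))
```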
### Future work
We are planning to take this up to 10M (and possibly 100M) vectors. Contact [@KShivendu_](https://twitter.com/KShivendu_) on Twitter or mail to hello@nirantk.com if you want to help :)
### Credits:
This dataset was generated from the first 1M entries of https://huggingface.co/datasets/BeIR/dbpedia-entity |
result-kand2-sdxl-wuerst-karlo/7f0dfe44 | 2023-09-27T19:12:09.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 344 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 212
num_examples: 10
download_size: 1370
dataset_size: 212
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "7f0dfe44"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
reddit_tifu | 2023-06-15T21:21:20.000Z | [
"task_categories:summarization",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:mit",
"reddit-posts-summarization",
"arxiv:1811.00783",
"region:us"
] | null | Reddit dataset, where TIFU denotes the name of subreddit /r/tifu.
As defined in the publication, style "short" uses title as summary and
"long" uses tldr as summary.
Features include:
- document: post text without tldr.
- tldr: tldr line.
- title: trimmed title without tldr.
- ups: upvotes.
- score: score.
- num_comments: number of comments.
- upvote_ratio: upvote ratio. | @misc{kim2018abstractive,
title={Abstractive Summarization of Reddit Posts with Multi-level Memory Networks},
author={Byeongchang Kim and Hyunwoo Kim and Gunhee Kim},
year={2018},
eprint={1811.00783},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 5 | 343 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: Reddit TIFU
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: reddit-tifu
tags:
- reddit-posts-summarization
dataset_info:
- config_name: short
features:
- name: ups
dtype: float32
- name: num_comments
dtype: float32
- name: upvote_ratio
dtype: float32
- name: score
dtype: float32
- name: documents
dtype: string
- name: tldr
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 137715925
num_examples: 79740
download_size: 670607856
dataset_size: 137715925
- config_name: long
features:
- name: ups
dtype: float32
- name: num_comments
dtype: float32
- name: upvote_ratio
dtype: float32
- name: score
dtype: float32
- name: documents
dtype: string
- name: tldr
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 91984758
num_examples: 42139
download_size: 670607856
dataset_size: 91984758
---
# Dataset Card for "reddit_tifu"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/ctr4si/MMN](https://github.com/ctr4si/MMN)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.34 GB
- **Size of the generated dataset:** 229.76 MB
- **Total amount of disk used:** 1.57 GB
### Dataset Summary
Reddit dataset, where TIFU denotes the name of subreddit /r/tifu.
As defined in the publication, style "short" uses title as summary and
"long" uses tldr as summary.
Features include:
- document: post text without tldr.
- tldr: tldr line.
- title: trimmed title without tldr.
- ups: upvotes.
- score: score.
- num_comments: number of comments.
- upvote_ratio: upvote ratio.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### long
- **Size of downloaded dataset files:** 670.61 MB
- **Size of the generated dataset:** 92.00 MB
- **Total amount of disk used:** 762.62 MB
An example of 'train' looks as follows.
```
{'ups': 115.0,
'num_comments': 23.0,
'upvote_ratio': 0.88,
'score': 115.0,
'documents': 'this actually happened a couple of years ago. i grew up in germany where i went to a german secondary school that went from 5th to 13th grade (we still had 13 grades then, they have since changed that). my school was named after anne frank and we had a club that i was very active in from 9th grade on, which was dedicated to teaching incoming 5th graders about anne franks life, discrimination, anti-semitism, hitler, the third reich and that whole spiel. basically a day where the students\' classes are cancelled and instead we give them an interactive history and social studies class with lots of activities and games. \n\nthis was my last year at school and i already had a lot of experience doing these project days with the kids. i was running the thing with a friend, so it was just the two of us and 30-something 5th graders. we start off with a brief introduction and brainstorming: what do they know about anne frank and the third reich? you\'d be surprised how much they know. anyway after the brainstorming we do a few activities, and then we take a short break. after the break we split the class into two groups to make it easier to handle. one group watches a short movie about anne frank while the other gets a tour through our poster presentation that our student group has been perfecting over the years. then the groups switch. \n\ni\'m in the classroom to show my group the movie and i take attendance to make sure no one decided to run away during break. i\'m going down the list when i come to the name sandra (name changed). a kid with a boyish haircut and a somewhat deeper voice, wearing clothes from the boy\'s section at a big clothing chain in germany, pipes up. \n\nnow keep in mind, these are all 11 year olds, they are all pre-pubescent, their bodies are not yet showing any sex specific features one would be able to see while they are fully clothed (e.g. boobs, beards,...). this being a 5th grade in the rather conservative (for german standards) bavaria, i was confused. i looked down at the list again making sure i had read the name right. look back up at the kid. \n\nme: "you\'re sandra?"\n\nkid: "yep."\n\nme: "oh, sorry. *thinking the kid must be from somewhere where sandra is both a girl\'s and boy\'s name* where are you from? i\'ve only ever heard that as a girl\'s name before."\n\nthe class starts laughing. sandra gets really quiet. "i am a girl..." she says. some of the other students start saying that their parents made the same mistake when they met sandra. i feel so sorry and stupid. i get the class to calm down and finish taking attendance. we watch the movie in silence. after the movie, when we walked down to where the poster presentation took place i apologised to sandra. i felt so incredibly terrible, i still do to this day. throughout the rest of the day i heard lots of whispers about sandra. i tried to stop them whenever they came up, but there was no stopping the 5th grade gossip i had set in motion.\n\nsandra, if you\'re out there, i am so incredibly sorry for humiliating you in front of your class. i hope you are happy and healthy and continue to live your life the way you like. don\'t let anyone tell you you have to dress or act a certain way just because of the body parts you were born with. i\'m sorry if i made you feel like you were wrong for dressing and acting differently. i\'m sorry i probably made that day hell for you. i\'m sorry for my ignorance.',
'tldr': 'confuse a 5th grade girl for a boy in front of half of her class. kids are mean. sorry sandra.**',
'title': 'gender-stereotyping'}
```
#### short
- **Size of downloaded dataset files:** 670.61 MB
- **Size of the generated dataset:** 137.75 MB
- **Total amount of disk used:** 808.37 MB
An example of 'train' looks as follows.
```
{'ups': 50.0,
'num_comments': 13.0,
'upvote_ratio': 0.77,
'score': 50.0,
'documents': "i was on skype on my tablet as i went to the toilet iming a friend. i don't multitask very well, so i forgot one of the most important things to do before pooping. i think the best part was when i realised and told my mate who just freaked out because i was talking to him on the john!",
'tldr': '',
'title': 'forgetting to pull my underwear down before i pooped.'}
```
### Data Fields
The data fields are the same among all splits.
#### long
- `ups`: a `float32` feature.
- `num_comments`: a `float32` feature.
- `upvote_ratio`: a `float32` feature.
- `score`: a `float32` feature.
- `documents`: a `string` feature.
- `tldr`: a `string` feature.
- `title`: a `string` feature.
#### short
- `ups`: a `float32` feature.
- `num_comments`: a `float32` feature.
- `upvote_ratio`: a `float32` feature.
- `score`: a `float32` feature.
- `documents`: a `string` feature.
- `tldr`: a `string` feature.
- `title`: a `string` feature.
### Data Splits
|name |train|
|-----|----:|
|long |42139|
|short|79740|
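For orientation, here is a minimal loading sketch (not part of the original card). It assumes the Hugging Face `datasets` library and the `short`/`long` configuration names listed above; depending on the `datasets` version, loading this dataset may additionally require passing `trust_remote_code=True`.
```python
from datasets import load_dataset

# Load the "long" configuration, where the tldr line is the reference summary.
tifu_long = load_dataset("reddit_tifu", "long", split="train")

example = tifu_long[0]
print(example["documents"][:200], "...")  # source post text, without the TL;DR
print("TL;DR:", example["tldr"])          # target summary in the "long" setting
```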
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
MIT License.
### Citation Information
```
@misc{kim2018abstractive,
title={Abstractive Summarization of Reddit Posts with Multi-level Memory Networks},
author={Byeongchang Kim and Hyunwoo Kim and Gunhee Kim},
year={2018},
eprint={1811.00783},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
result-kand2-sdxl-wuerst-karlo/196211bd | 2023-09-27T20:00:16.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 343 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 217
num_examples: 10
download_size: 1421
dataset_size: 217
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "196211bd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |