id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
SEACrowd/id_multilabel_hs | SEACrowd | 2023-09-26T12:29:46Z | 50 | 0 | null | [
"language:ind",
"aspect-based-sentiment-analysis",
"region:us"
] | 2023-09-26T12:29:46Z | 2023-09-26T11:13:49.000Z | 2023-09-26T11:13:49 | ---
tags:
- aspect-based-sentiment-analysis
language:
- ind
---
# id_multilabel_hs
The ID_MULTILABEL_HS dataset is a collection of 13,169 Indonesian-language tweets,
designed for the hate speech detection NLP task. It combines data from previous research with newly crawled tweets from Twitter.
This is a multilabel dataset with label details as follows:
- HS: hate speech label;
- Abusive: abusive language label;
- HS_Individual: hate speech targeted at an individual;
- HS_Group: hate speech targeted at a group;
- HS_Religion: hate speech related to religion/creed;
- HS_Race: hate speech related to race/ethnicity;
- HS_Physical: hate speech related to physical traits/disability;
- HS_Gender: hate speech related to gender/sexual orientation;
- HS_Other: hate speech related to other invective/slander;
- HS_Weak: weak hate speech;
- HS_Moderate: moderate hate speech;
- HS_Strong: strong hate speech.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
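A minimal loading sketch (assuming the Hub repo ID `SEACrowd/id_multilabel_hs` can be passed directly to `load_dataset`; NusaCrowd may also expose its own loader):
```python
# Sketch: load the dataset after installing nusacrowd.
# Assumes the Hub repo ID works directly with load_dataset;
# split names may differ, so inspect `ds` before indexing.
from datasets import load_dataset

ds = load_dataset("SEACrowd/id_multilabel_hs")
print(ds)  # inspect available splits and features
```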
## Citation
```
@inproceedings{ibrohim-budi-2019-multi,
title = "Multi-label Hate Speech and Abusive Language Detection in {I}ndonesian {T}witter",
author = "Ibrohim, Muhammad Okky and
Budi, Indra",
booktitle = "Proceedings of the Third Workshop on Abusive Language Online",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W19-3506",
doi = "10.18653/v1/W19-3506",
pages = "46--57",
}
```
## License
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
## Homepage
[https://aclanthology.org/W19-3506/](https://aclanthology.org/W19-3506/)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.4915729761123657,
-0.6278833746910095,
-0.1933390349149704,
0.3479922413825989,
-0.24173134565353394,
0.258670449256897,
-0.14510823786258698,
-0.4144258201122284,
0.4817808270454407,
0.4629691541194916,
-0.42494654655456543,
-0.869274377822876,
-0.9309741854667664,
0.2631898522377014,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tessiw/German_GuanacoDataset | tessiw | 2023-09-26T12:52:18Z | 50 | 1 | null | [
"task_categories:conversational",
"language:de",
"region:us"
] | 2023-09-26T12:52:18Z | 2023-09-26T12:34:29.000Z | 2023-09-26T12:34:29 | ---
language:
- de
task_categories:
- conversational
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 77973314
num_examples: 139476
download_size: 40038214
dataset_size: 77973314
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This dataset is a subset of the [JosephusCheung/GuanacoDataset](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset/viewer/default/train?p=11736) dataset: only the German samples were selected, and each sample was formatted with the following template for chat models:
```<s>[INST] User prompt [/INST] Model answer </s>```
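As an illustration of how a prompt/answer pair maps onto this template (the helper below is a sketch and is not part of the dataset):
```python
# Illustrative helper (not shipped with the dataset): wrap a prompt and
# answer in the Llama-2-style chat template shown above.
def format_example(prompt: str, answer: str) -> str:
    return f"<s>[INST] {prompt} [/INST] {answer} </s>"

print(format_example("Wie spät ist es?", "Es ist drei Uhr."))
# -> <s>[INST] Wie spät ist es? [/INST] Es ist drei Uhr. </s>
```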
| [
-0.25764206051826477,
-0.8270015120506287,
0.21678900718688965,
0.12330912798643112,
-0.32599934935569763,
-0.016231954097747803,
-0.04688815400004387,
0.040053144097328186,
0.9365909099578857,
0.7073410749435425,
-1.25664222240448,
-0.6148648858070374,
-0.3676561117172241,
-0.122906789183... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ykumards/open-i | ykumards | 2023-09-27T11:54:03Z | 50 | 0 | null | [
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nd-4.0",
"region:us"
] | 2023-09-27T11:54:03Z | 2023-09-27T11:35:12.000Z | 2023-09-27T11:35:12 | ---
dataset_info:
features:
- name: uid
dtype: int64
- name: MeSH
dtype: string
- name: Problems
dtype: string
- name: image
dtype: string
- name: indication
dtype: string
- name: comparison
dtype: string
- name: findings
dtype: string
- name: impression
dtype: string
- name: img_frontal
dtype: binary
- name: img_lateral
dtype: binary
splits:
- name: train
num_bytes: 2104109741
num_examples: 3851
download_size: 2095869611
dataset_size: 2104109741
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-nd-4.0
language:
- en
pretty_name: Chest X-rays (Indiana University)
size_categories:
- 1K<n<10K
---
# Chest X-rays (Indiana University)
Copy of the kaggle dataset: https://www.kaggle.com/datasets/raddar/chest-xrays-indiana-university created by [raddar](https://www.kaggle.com/raddar)
---
Open access chest X-ray collection from Indiana University
Original source: https://openi.nlm.nih.gov/
Original images were downloaded in the raw DICOM standard. Each image was converted to PNG with the following post-processing:
- top/bottom 0.5% of DICOM pixel values were clipped (to eliminate very dark or very bright pixel outliers);
- DICOM pixel values were scaled linearly to fit into the 0-255 range;
- images were resized to 2048 px on the shorter side (to fit within Kaggle dataset limits).
Metadata was downloaded using the available API (https://openi.nlm.nih.gov/services#searchAPIUsingGET).
Each image was classified manually into frontal and lateral chest X-ray categories.
License: [Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ](https://creativecommons.org/licenses/by-nc-nd/4.0/)
---
### Usage
The lateral and frontal images of each uid are grouped together in an example. The images are stored as bytes and can be loaded into a PIL Image with the following helper:
```python
import io
from PIL import Image

def load_image_from_byte_array(byte_array):
    # Decode image bytes (e.g. img_frontal / img_lateral) into a PIL Image
    return Image.open(io.BytesIO(byte_array))
```
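A usage sketch (assuming the dataset loads through `datasets` under this repo's ID, with the `train` split and `img_frontal` field from the card above):
```python
# Sketch: decode the frontal image of the first example.
from datasets import load_dataset

ds = load_dataset("ykumards/open-i", split="train")
frontal = load_image_from_byte_array(ds[0]["img_frontal"])
print(frontal.size)  # PIL Image of the frontal chest X-ray
```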
### Cite
Please cite the original [source of the dataset](https://openi.nlm.nih.gov/).
```
@article{demner2016preparing,
title={Preparing a collection of radiology examinations for distribution and retrieval},
author={Demner-Fushman, Dina and Kohli, Marc D and Rosenman, Marc B and Shooshan, Sonya E and Rodriguez, Laritza and Antani, Sameer and Thoma, George R and McDonald, Clement J},
journal={Journal of the American Medical Informatics Association},
volume={23},
number={2},
pages={304--310},
year={2016},
publisher={Oxford University Press}
}
```
| [
-0.4759058356285095,
-0.31056544184684753,
0.4360266327857971,
0.03158462792634964,
-0.38353437185287476,
-0.29400336742401123,
0.14209486544132233,
-0.2893809974193573,
0.26241931319236755,
0.6637449264526367,
-0.5472515225410461,
-0.5876286625862122,
-0.5502809882164001,
0.14101219177246... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
philschmid/markdown-documentation-transformers | philschmid | 2023-10-05T13:42:59Z | 50 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-05T13:42:59Z | 2023-10-05T13:38:10.000Z | 2023-10-05T13:38:10 | ---
license: apache-2.0
---
# Hugging Face Transformers documentation as markdown dataset
This dataset was created using [Clipper.js](https://github.com/philschmid/clipper.js). Clipper is a Node.js command line tool that allows you to easily clip content from web pages and convert it to Markdown. It uses Mozilla's Readability library and Turndown under the hood to parse web page content and convert it to Markdown.
This dataset can be used to build RAG applications that rely on the Transformers documentation.
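For instance, a minimal indexing sketch (the column name `"text"` below is an assumption; inspect `ds.features` to find the actual markdown column):
```python
# Sketch: naive fixed-size chunking of the markdown docs for RAG.
# The column name "text" is an assumption; check ds.features first.
from datasets import load_dataset

ds = load_dataset("philschmid/markdown-documentation-transformers", split="train")
print(ds.features)  # verify the markdown column name before chunking
chunks = [doc[i : i + 1000] for doc in ds["text"] for i in range(0, len(doc), 1000)]
print(f"{len(chunks)} chunks ready to embed into a vector store")
```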
Example document: https://huggingface.co/docs/transformers/peft
```
# Load adapters with 🤗 PEFT
[Parameter-Efficient Fine Tuning (PEFT)](https://huggingface.co/blog/peft) methods freeze the pretrained model parameters during fine-tuning and add a small number of trainable parameters (the adapters) on top of it. The adapters are trained to learn task-specific information. This approach has been shown to be very memory-efficient with lower compute usage while producing results comparable to a fully fine-tuned model.
Adapters trained with PEFT are also usually an order of magnitude smaller than the full model, making it convenient to share, store, and load them.

The adapter weights for a OPTForCausalLM model stored on the Hub are only ~6MB compared to the full size of the model weights, which can be ~700MB.
If you’re interested in learning more about the 🤗 PEFT library, check out the [documentation](https://huggingface.co/docs/peft/index).
## Setup
Get started by installing 🤗 PEFT:
If you want to try out the brand new features, you might be interested in installing the library from source:
....
``` | [
-0.8227362036705017,
-0.4621904194355011,
0.30699172616004944,
0.1754334419965744,
-0.11392771452665329,
0.053006891161203384,
-0.11319976300001144,
-0.31160858273506165,
0.42659708857536316,
0.6850295066833496,
-0.6982780694961548,
-0.3144717812538147,
-0.5005097389221191,
-0.118114538490... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gbarone77/camoscio_llama2 | gbarone77 | 2023-10-06T09:41:52Z | 50 | 0 | null | [
"region:us"
] | 2023-10-06T09:41:52Z | 2023-10-06T09:37:42.000Z | 2023-10-06T09:37:42 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
meta-math/MetaMathQA-40K | meta-math | 2023-11-10T01:42:51Z | 50 | 5 | null | [
"license:mit",
"arxiv:2309.12284",
"region:us"
] | 2023-11-10T01:42:51Z | 2023-10-07T14:47:58.000Z | 2023-10-07T14:47:58 | ---
license: mit
---
arxiv.org/abs/2309.12284
View the project page:
https://meta-math.github.io/ | [
-0.7626900672912598,
-0.3629370629787445,
0.44726452231407166,
0.11818762123584747,
-0.23092754185199738,
0.07009516656398773,
0.13962595164775848,
-0.1645512729883194,
0.88106369972229,
0.8391180038452148,
-0.7729040384292603,
-0.6759433150291443,
-0.03133334964513779,
0.02232918702065944... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
famepram/llama-2-jk48-demo | famepram | 2023-11-08T04:14:12Z | 50 | 0 | null | [
"license:other",
"region:us"
] | 2023-11-08T04:14:12Z | 2023-10-07T16:45:09.000Z | 2023-10-07T16:45:09 | ---
license: other
license_name: readme.md
license_link: LICENSE
---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: table
dtype: string
# Dataset Card for "Llama-2-JKT48-FP"
This dataset is intended to provide LLaMA 2 with improved coding and instruction-following capabilities, with a specific focus on JKT48 knowledge.
The dataset was created for practicing LLaMA 2 training. | [
-0.18346823751926422,
-0.251319020986557,
-0.05345825478434563,
0.1608770787715912,
-0.7605008482933044,
0.45671355724334717,
0.12649033963680267,
-0.1902228444814682,
0.22815291583538055,
0.48428723216056824,
-0.5905144810676575,
-0.5892179012298584,
-0.7475924491882324,
-0.07259107381105... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
promptora11/finetune | promptora11 | 2023-10-09T12:38:42Z | 50 | 0 | null | [
"region:us"
] | 2023-10-09T12:38:42Z | 2023-10-09T12:33:06.000Z | 2023-10-09T12:33:06 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 74044
num_examples: 100
download_size: 7883
dataset_size: 74044
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "finetune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8228302001953125,
-0.38064709305763245,
0.1638476699590683,
0.04820871353149414,
-0.25546926259994507,
-0.3691510558128357,
0.05544900894165039,
-0.1491510421037674,
0.7938298583030701,
0.5317284464836121,
-0.8582543730735779,
-0.6409620642662048,
-0.4979695677757263,
-0.262046754360199... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hongerzh/my-NFT-classifier-dataset2 | hongerzh | 2023-10-11T17:28:24Z | 50 | 0 | null | [
"region:us"
] | 2023-10-11T17:28:24Z | 2023-10-11T16:25:06.000Z | 2023-10-11T16:25:06 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': notsale
'1': sale
splits:
- name: train
num_bytes: 6616368416.534
num_examples: 28754
- name: validation
num_bytes: 2371905486.911
num_examples: 9593
- name: test
num_bytes: 2720948631.066
num_examples: 9569
download_size: 8806005035
dataset_size: 11709222534.511
---
# Dataset Card for "my-NFT-classifier-dataset2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6085724234580994,
-0.2344532012939453,
0.09110236167907715,
0.16766022145748138,
-0.12287642061710358,
0.1161530464887619,
0.3657459318637848,
-0.297494113445282,
0.6884471774101257,
0.39580097794532776,
-0.6755231022834778,
-0.21738268435001373,
-0.6137818694114685,
-0.1393112540245056... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/PubmedSumm_test | hippocrates | 2023-10-17T19:54:03Z | 50 | 0 | null | [
"region:us"
] | 2023-10-17T19:54:03Z | 2023-10-13T10:51:14.000Z | 2023-10-13T10:51:14 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2293491121
num_examples: 119924
- name: valid
num_bytes: 129680450
num_examples: 6633
- name: test
num_bytes: 129463253
num_examples: 6658
download_size: 1172343963
dataset_size: 2552634824
---
# Dataset Card for "PubmedSumm_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4292760491371155,
-0.2502864897251129,
0.30358263850212097,
0.16876175999641418,
-0.262527197599411,
-0.12658050656318665,
0.19129468500614166,
-0.01351762842386961,
0.8545579314231873,
0.560479998588562,
-0.7008746862411499,
-0.7773017287254333,
-0.6255815029144287,
-0.0089993020519614... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/2018n2c2_RE_test | hippocrates | 2023-10-16T18:12:07Z | 50 | 0 | null | [
"region:us"
] | 2023-10-16T18:12:07Z | 2023-10-16T18:12:04.000Z | 2023-10-16T18:12:04 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SalomonMetre13/nnd_fr_14k | SalomonMetre13 | 2023-10-27T09:10:40Z | 50 | 0 | null | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:nnd",
"license:mit",
"region:us"
] | 2023-10-27T09:10:40Z | 2023-10-22T08:12:12.000Z | 2023-10-22T08:12:12 | ---
license: mit
language:
- nnd
task_categories:
- translation
size_categories:
- 10K<n<100K
---
This <span style="color:teal;">parallel corpus </span> contains <span style="color:teal;">14,478</span> aligned sentence pairs <span style="color:teal;">Nande-French</span> in a <span style="color:teal;">90:10</span> split for the train and the test sets. It has been mainly used to fine-tune the <span style="color:teal;"> t5-base </span> pretrained model for the development of <a href="https://huggingface.co/SalomonMetre13/nnd_fr_mt" style="color:green;">this translation model </a> | [
-0.4639682173728943,
-0.5957061052322388,
0.4281097650527954,
0.7870616912841797,
-0.40383702516555786,
0.25598204135894775,
-0.28389713168144226,
-0.2295939028263092,
0.30055171251296997,
-0.007540830876678228,
-0.5177103877067566,
-0.5536156892776489,
-0.6629605293273926,
0.3443320989608... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
andersonbcdefg/micropile | andersonbcdefg | 2023-10-23T17:38:01Z | 50 | 0 | null | [
"region:us"
] | 2023-10-23T17:38:01Z | 2023-10-23T17:37:57.000Z | 2023-10-23T17:37:57 | ---
dataset_info:
features:
- name: text
dtype: string
- name: __id
dtype: int64
splits:
- name: train
num_bytes: 5544284
num_examples: 1000
download_size: 2933209
dataset_size: 5544284
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "micropile"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7202737927436829,
-0.22751900553703308,
0.2621728479862213,
0.11839482188224792,
0.0028214596677571535,
0.11931557208299637,
0.2521759867668152,
0.06496315449476242,
0.9040467739105225,
0.41397154331207275,
-0.7807052731513977,
-0.7626309394836426,
-0.48621344566345215,
-0.4728356301784... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bengisucam/LLL_INO-tagged | bengisucam | 2023-11-09T17:54:13Z | 50 | 0 | null | [
"region:us"
] | 2023-11-09T17:54:13Z | 2023-11-09T17:39:58.000Z | 2023-11-09T17:39:58 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ajibawa-2023/Python-Code-23k-ShareGPT | ajibawa-2023 | 2023-11-11T12:27:43Z | 50 | 11 | null | [
"language:en",
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2023-11-11T12:27:43Z | 2023-11-10T12:11:19.000Z | 2023-11-10T12:11:19 | ---
license: cc-by-nc-nd-4.0
language:
- en
---
This dataset is in Vicuna/ShareGPT format. There are 23,000+ sets of conversations, each set containing 2 conversations.
A detailed explanation is provided along with the Python code.
This dataset was generated using GPT-3.5, GPT-4, etc. | [
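A sketch of the record shape Vicuna/ShareGPT-style files typically use (field values are invented for illustration):
```python
# Illustrative ShareGPT-style record; the values are made up.
example = {
    "conversations": [
        {"from": "human", "value": "Write a Python function that reverses a string."},
        {"from": "gpt", "value": "def reverse(s: str) -> str:\n    return s[::-1]"},
    ]
}
for turn in example["conversations"]:
    print(turn["from"], ":", turn["value"][:40])
```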
-0.374011367559433,
-0.9804115891456604,
0.16759957373142242,
0.533781886100769,
-0.12258504331111908,
-0.424845814704895,
0.11618591099977493,
-0.24925990402698517,
0.2353818267583847,
0.8030855059623718,
-0.7935972213745117,
-0.37383851408958435,
-0.30714255571365356,
-0.0827364176511764... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gianma/camera_cleaned_8192_174 | gianma | 2023-11-11T18:59:27Z | 50 | 0 | null | [
"region:us"
] | 2023-11-11T18:59:27Z | 2023-11-11T18:59:20.000Z | 2023-11-11T18:59:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: is_camera
dtype: bool
- name: reference
dtype: string
- name: summary
dtype: string
- name: tokenized_len_total
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1549422
num_examples: 97
- name: validation
num_bytes: 103831
num_examples: 6
- name: test
num_bytes: 107835
num_examples: 6
download_size: 728059
dataset_size: 1761088
---
# Dataset Card for "camera_cleaned_8192_174"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6044971942901611,
0.0511469766497612,
0.34688037633895874,
-0.06225913390517235,
-0.4860619604587555,
-0.1674361377954483,
0.32829591631889343,
-0.16756924986839294,
0.7076738476753235,
0.5221664905548096,
-0.7366922497749329,
-0.47702494263648987,
-0.6349955201148987,
-0.10355125367641... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
justinphan3110/ethics_legal_concept_llama2_chat | justinphan3110 | 2023-11-13T05:50:20Z | 50 | 0 | null | [
"region:us"
] | 2023-11-13T05:50:20Z | 2023-11-13T05:50:19.000Z | 2023-11-13T05:50:19 | ---
dataset_info:
features:
- name: sentence
sequence: string
- name: label
sequence: bool
splits:
- name: train
num_bytes: 147896
num_examples: 100
download_size: 25209
dataset_size: 147896
---
# Dataset Card for "ethics_legal_concept_llama2_chat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.06195656582713127,
-0.28299054503440857,
0.38052400946617126,
0.2843189239501953,
-0.6031351685523987,
0.1136472150683403,
0.23128058016300201,
-0.3331545889377594,
0.7036502957344055,
0.6693839430809021,
-0.6142239570617676,
-0.7944730520248413,
-0.6268296241760254,
-0.3874526619911194... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
philschmid/guanaco-sharegpt-style | philschmid | 2023-11-13T13:24:10Z | 50 | 0 | null | [
"region:us"
] | 2023-11-13T13:24:10Z | 2023-11-13T13:23:37.000Z | 2023-11-13T13:23:37 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 13979844
num_examples: 9033
download_size: 8238076
dataset_size: 13979844
---
# Dataset Card for "guanaco-sharegpt-style"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.521929919719696,
-0.33578649163246155,
0.17043672502040863,
0.47184231877326965,
-0.3063178062438965,
-0.052627161145210266,
0.06604082137346268,
-0.19884248077869415,
1.0088894367218018,
0.2071114182472229,
-0.8317847847938538,
-0.9700176119804382,
-0.7301415801048279,
-0.3956205844879... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
facebook/emu_edit_test_set | facebook | 2023-11-19T07:37:12Z | 50 | 27 | null | [
"region:us"
] | 2023-11-19T07:37:12Z | 2023-11-13T16:52:43.000Z | 2023-11-13T16:52:43 | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: instruction
dtype: string
- name: image
dtype: image
- name: task
dtype: string
- name: split
dtype: string
- name: idx
dtype: int64
- name: hash
dtype: string
- name: input_caption
dtype: string
- name: output_caption
dtype: string
splits:
- name: validation
num_bytes: 766327032.29
num_examples: 2022
- name: test
num_bytes: 1353530752.0
num_examples: 3589
download_size: 1904598290
dataset_size: 2119857784.29
---
# Dataset Card for the Emu Edit Test Set
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage: https://emu-edit.metademolab.com/**
- **Paper: https://emu-edit.metademolab.com/assets/emu_edit.pdf**
### Dataset Summary
To create a benchmark for image editing we first define seven different categories of potential image editing operations: background alteration (background), comprehensive image changes (global), style alteration (style), object removal (remove), object addition (add), localized modifications (local), and color/texture alterations (texture).
Then, we utilize the diverse set of input images from the [MagicBrush benchmark](https://huggingface.co/datasets/osunlp/MagicBrush), and for each editing operation, we task crowd workers to devise relevant, creative, and challenging instructions.
Moreover, to increase the quality of the collected examples, we apply a post-verification stage, in which crowd workers filter examples with irrelevant instructions.
Finally, to support evaluation for methods that require input and output captions (e.g. prompt2prompt and pnp), we additionally collect an input caption and output caption for each example.
When doing so, we ask annotators to ensure that the captions capture both important elements in the image, and elements that should change based on the instruction.
Additionally, to support proper comparison with Emu Edit, we publicly release the model generations on the test set [here](https://huggingface.co/datasets/facebook/emu_edit_test_set_generations).
For more details please see our [paper](https://emu-edit.metademolab.com/assets/emu_edit.pdf) and [project page](https://emu-edit.metademolab.com/).
### Licensing Information
Licensed with CC-BY-NC 4.0 License available [here](https://creativecommons.org/licenses/by-nc/4.0/legalcode?fbclid=IwAR2SYZjLRywwUMblkWg0LyAxHVVTloIFlvC-ju3BthIYtOM2jpQHgbeXOsM).
### Citation Information
```
@inproceedings{Sheynin2023EmuEP,
title={Emu Edit: Precise Image Editing via Recognition and Generation Tasks},
author={Shelly Sheynin and Adam Polyak and Uriel Singer and Yuval Kirstain and Amit Zohar and Oron Ashual and Devi Parikh and Yaniv Taigman},
year={2023},
url={https://api.semanticscholar.org/CorpusID:265221391}
}
``` | [
-0.9309884905815125,
-0.9677913784980774,
0.30367687344551086,
-0.0327082984149456,
-0.05110416188836098,
-0.2135002464056015,
-0.23319709300994873,
-0.4491507411003113,
0.058780938386917114,
0.3147900104522705,
-0.6931902170181274,
-0.5430288910865784,
-0.44496995210647583,
0.545731186866... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/HoC_test | hippocrates | 2023-11-14T16:06:18Z | 50 | 0 | null | [
"region:us"
] | 2023-11-14T16:06:18Z | 2023-11-14T16:06:15.000Z | 2023-11-14T16:06:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: train
num_bytes: 3062367
num_examples: 1108
- name: valid
num_bytes: 430760
num_examples: 157
- name: test
num_bytes: 863678
num_examples: 315
download_size: 1496963
dataset_size: 4356805
---
# Dataset Card for "HoC_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8053320646286011,
-0.6438663601875305,
0.054803356528282166,
0.146774023771286,
-0.29670315980911255,
-0.2745112180709839,
0.37895163893699646,
-0.2890681028366089,
0.7097592949867249,
0.3528081774711609,
-0.8217690587043762,
-0.779434084892273,
-0.3870745897293091,
-0.12741759419441223... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlbaker361/small_division_whole | jlbaker361 | 2023-11-17T05:53:41Z | 50 | 0 | null | [
"region:us"
] | 2023-11-17T05:53:41Z | 2023-11-17T04:47:34.000Z | 2023-11-17T04:47:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1231.111111111111
num_examples: 32
- name: test
num_bytes: 153.88888888888889
num_examples: 4
download_size: 4157
dataset_size: 1385.0
---
# Dataset Card for "small_division_whole"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6158143877983093,
-0.4012046158313751,
0.3723752498626709,
0.2342289835214615,
-0.47328585386276245,
-0.20705413818359375,
0.15866883099079132,
0.024895036593079567,
1.0414032936096191,
0.3739515244960785,
-0.9086466431617737,
-0.6778810620307922,
-0.5264955759048462,
-0.249483585357666... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PaulLoisel/wcr_base | PaulLoisel | 2023-11-20T17:33:11Z | 50 | 0 | null | [
"region:us"
] | 2023-11-20T17:33:11Z | 2023-11-20T17:33:06.000Z | 2023-11-20T17:33:06 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
dataset_info:
features:
- name: Age
dtype: int64
- name: Title
dtype: string
- name: Review Text
dtype: string
- name: Rating
dtype: int64
- name: label
dtype: int64
- name: Positive Feedback Count
dtype: int64
- name: Division Name
dtype: string
- name: Department Name
dtype: string
- name: Class Name
dtype: string
splits:
- name: train
num_bytes: 145123.3
num_examples: 350
- name: test
num_bytes: 31097.85
num_examples: 75
- name: val
num_bytes: 31097.85
num_examples: 75
download_size: 129979
dataset_size: 207319.0
---
# Dataset Card for "wcr_base"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5907537937164307,
-0.046406444162130356,
0.09674348682165146,
0.2924972474575043,
-0.3143041729927063,
0.045542020350694656,
0.35543516278266907,
-0.14657080173492432,
0.47739148139953613,
0.5848845839500427,
-0.7265614867210388,
-0.7466005682945251,
-0.6896297931671143,
-0.262239009141... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deokhk/zh_wiki_sentences_100000 | deokhk | 2023-11-21T07:38:13Z | 50 | 0 | null | [
"region:us"
] | 2023-11-21T07:38:13Z | 2023-11-21T07:38:08.000Z | 2023-11-21T07:38:08 | ---
dataset_info:
features:
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 13169192
num_examples: 100000
- name: dev
num_bytes: 131900
num_examples: 1000
download_size: 9348002
dataset_size: 13301092
---
# Dataset Card for "zh_wiki_sentences_100000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6005434393882751,
-0.36445698142051697,
0.3269902169704437,
0.25217917561531067,
-0.2245241403579712,
-0.4713442921638489,
-0.014687621034681797,
-0.10495998710393906,
0.7382227182388306,
0.5325949788093567,
-0.9277271032333374,
-0.7041006684303284,
-0.34504711627960205,
0.0380268134176... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Innovina/Test2 | Innovina | 2023-11-28T09:25:24Z | 50 | 0 | null | [
"task_categories:text-classification",
"language:en",
"license:mit",
"code",
"region:us"
] | 2023-11-28T09:25:24Z | 2023-11-23T14:05:06.000Z | 2023-11-23T14:05:06 | ---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- code
pretty_name: test
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
confit/gtzan | confit | 2023-11-24T10:20:07Z | 50 | 0 | null | [
"region:us"
] | 2023-11-24T10:20:07Z | 2023-11-24T09:32:29.000Z | 2023-11-24T09:32:29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: filename
dtype: string
- name: label
dtype:
class_label:
names:
'0': blues
'1': classical
'2': country
'3': disco
'4': hiphop
'5': jazz
'6': metal
'7': pop
'8': reggae
'9': rock
splits:
- name: train
num_bytes: 17683
num_examples: 443
- name: validation
num_bytes: 7871
num_examples: 197
- name: test
num_bytes: 11546
num_examples: 290
download_size: 10908
dataset_size: 37100
---
# Dataset Card for "gtzan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5733677744865417,
-0.2387782633304596,
0.4321751892566681,
0.2111096978187561,
-0.20063157379627228,
-0.08195317536592484,
0.05734911933541298,
-0.2714576721191406,
0.8226653337478638,
0.46572020649909973,
-1.101388931274414,
-0.8898378610610962,
-0.4106867015361786,
-0.4042168259620666... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rony/soccer-dialogues | rony | 2022-07-25T08:33:15Z | 49 | 0 | null | [
"region:us"
] | 2022-07-25T08:33:15Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SetFit/amazon_reviews_multi_fr | SetFit | 2022-03-23T15:45:44Z | 49 | 0 | null | [
"region:us"
] | 2022-03-23T15:45:44Z | 2022-03-13T02:48:20.000Z | 2022-03-13T02:48:20 | # Amazon Reviews Multi (French)
This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub, restricted to the French-language version. It has been reduced to just 3 columns (plus a 4th, "label_text") that are relevant to the SetFit task. | [
-0.8210474252700806,
-0.451740562915802,
0.008561830967664719,
0.7448611855506897,
-0.15750540792942047,
0.08232040703296661,
0.012341687455773354,
-0.4625762104988098,
0.6683513522148132,
1.1220237016677856,
-0.9679341912269592,
-0.33486074209213257,
-0.2548729181289673,
0.355095714330673... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gdwangh/kaggle-nlp-getting-start | gdwangh | 2022-04-09T08:13:03Z | 49 | 0 | null | [
"region:us"
] | 2022-04-09T08:13:03Z | 2022-04-09T08:03:46.000Z | 2022-04-09T08:03:46 | Dataset Summary
- Natural Language Processing with Disaster Tweets: https://www.kaggle.com/competitions/nlp-getting-started/data
- This particular challenge is perfect for data scientists looking to get started with Natural Language Processing. The competition dataset is not too big, and even if you don’t have much personal computing power, you can do all of the work in our free, no-setup, Jupyter Notebooks environment called Kaggle Notebooks.
Columns
- id - a unique identifier for each tweet
- text - the text of the tweet
- location - the location the tweet was sent from (may be blank)
- keyword - a particular keyword from the tweet (may be blank)
- target - in train.csv only, this denotes whether a tweet is about a real disaster (1) or not (0)
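A quick inspection sketch (assuming `train.csv` has been downloaded locally from the competition page):
```python
# Sketch: inspect the downloaded train.csv with pandas.
import pandas as pd

train = pd.read_csv("train.csv")  # path assumes a local download
print(train[["id", "keyword", "location", "text", "target"]].head())
print(train["target"].value_counts())  # 1 = real disaster, 0 = not
```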
| [
-0.07667702436447144,
-0.7205347418785095,
0.42146289348602295,
0.5707447528839111,
-0.12617361545562744,
0.6253471970558167,
-0.22415190935134888,
-0.32184696197509766,
0.32136058807373047,
0.5294914841651917,
-0.7429713606834412,
-0.6204068660736084,
-0.8072955012321472,
0.24921202659606... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
raquiba/Sarcasm_News_Headline | raquiba | 2022-04-14T08:19:08Z | 49 | 2 | null | [
"region:us"
] | 2022-04-14T08:19:08Z | 2022-04-12T03:50:36.000Z | 2022-04-12T03:50:36 | Past studies in Sarcasm Detection mostly make use of Twitter datasets collected using hashtag-based supervision, but such datasets are noisy in terms of labels and language. Furthermore, many tweets are replies to other tweets, and detecting sarcasm in these requires the availability of contextual tweets.
To overcome the limitations related to noise in Twitter datasets, this Headlines dataset for Sarcasm Detection was collected from two news websites. TheOnion aims at producing sarcastic versions of current events, and we collected all the headlines from the News in Brief and News in Photos categories (which are sarcastic). We collected real (and non-sarcastic) news headlines from HuffPost.
This new dataset has the following advantages over the existing Twitter datasets:
- Since news headlines are written by professionals in a formal manner, there are no spelling mistakes and informal usage. This reduces the sparsity and also increases the chance of finding pre-trained embeddings.
- Furthermore, since the sole purpose of TheOnion is to publish sarcastic news, we get high-quality labels with much less noise as compared to Twitter datasets.
- Unlike tweets which are replies to other tweets, the news headlines we obtained are self-contained. This would help us in teasing apart the real sarcastic elements. | [
-0.23536552488803864,
-0.6626030206680298,
0.21762587130069733,
0.656610906124115,
-0.424947053194046,
-0.14933866262435913,
-0.3066434860229492,
-0.4949658215045929,
0.5321385264396667,
0.3658003509044647,
-0.5483190417289734,
-0.7136632800102234,
-0.3511659502983093,
0.21753910183906555,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
osyvokon/pavlick-formality-scores | osyvokon | 2023-11-24T16:41:26Z | 49 | 1 | null | [
"task_categories:text-classification",
"task_ids:text-scoring",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"region:us"
] | 2023-11-24T16:41:26Z | 2022-04-27T15:28:07.000Z | 2022-04-27T15:28:07 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-3.0
multilinguality:
- monolingual
pretty_name: 'Sentence-level formality annotations for news, blogs, email and QA forums.
Published in "An Empirical Analysis of Formality in Online Communication" (Pavlick
and Tetreault, 2016) '
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
---
This dataset contains sentence-level formality annotations used in the 2016
TACL paper "An Empirical Analysis of Formality in Online Communication"
(Pavlick and Tetreault, 2016). It includes sentences from four genres (news,
blogs, email, and QA forums), all annotated by humans on Amazon Mechanical
Turk. The news and blog data was collected by Shibamouli Lahiri, and we are
redistributing it here for the convenience of other researchers. We collected
the email and answers data ourselves, using a similar annotation setup to
Shibamouli.
In the original dataset, `answers` and `email` were tokenized. In this version,
Oleksiy Syvokon detokenized them with `moses-detokenizer` and a set of
additional regexps.
If you use this data in your work, please cite BOTH of the below papers:
```
@article{PavlickAndTetreault-2016:TACL,
author = {Ellie Pavlick and Joel Tetreault},
title = {An Empirical Analysis of Formality in Online Communication},
journal = {Transactions of the Association for Computational Linguistics},
year = {2016},
publisher = {Association for Computational Linguistics}
}
@article{Lahiri-2015:arXiv,
title={{SQUINKY! A} Corpus of Sentence-level Formality, Informativeness, and Implicature},
author={Lahiri, Shibamouli},
journal={arXiv preprint arXiv:1506.02306},
year={2015}
}
```
## Contents
The annotated data files and number of lines in each are as follows:
* 4977 answers -- Annotated sentences from a random sample of posts from the Yahoo! Answers forums: https://answers.yahoo.com/
* 1821 blog -- Annotated sentences from the top 100 blogs listed on http://technorati.com/ on October 31, 2009.
* 1701 email -- Annotated sentences from a random sample of emails from the Jeb Bush email archive: http://americanbridgepac.org/jeb-bushs-gubernatorial-email-archive/
* 2775 news -- Annotated sentences from the "breaking", "recent", and "local" news sections of the following 20 news sites: CNN, CBS News, ABC News, Reuters, BBC News Online, New York Times, Los Angeles Times, The Guardian (U.K.), Voice of America, Boston Globe, Chicago Tribune, San Francisco Chronicle, Times Online (U.K.), news.com.au, Xinhua, The Times of India, Seattle Post Intelligencer, Daily Mail, and Bloomberg L.P.
## Format
Each record contains the following fields:
1. `avg_score`: the mean formality rating, which ranges from -3 to 3 where lower scores indicate less formal sentences
2. `sentence`
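A loading sketch (assuming the data is exposed through `datasets` under this repo's ID; split names should be checked on the Hub):
```python
# Sketch: load the formality annotations and peek at the score scale.
from datasets import load_dataset

ds = load_dataset("osyvokon/pavlick-formality-scores", split="train")
row = ds[0]
print(row["avg_score"], row["sentence"])  # avg_score in [-3, 3]; lower = less formal
```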
| [
-0.48575806617736816,
-1.2212920188903809,
0.4189665913581848,
0.27752071619033813,
-0.26272842288017273,
-0.22662602365016937,
-0.3620695471763611,
-0.8400697708129883,
0.2758713662624359,
0.4115331768989563,
-0.3821971118450165,
-0.6576987504959106,
-0.41849014163017273,
0.54059207439422... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ett | null | 2022-11-18T22:07:07Z | 49 | 3 | null | [
"task_categories:time-series-forecasting",
"task_ids:univariate-time-series-forecasting",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:... | 2022-11-18T22:07:07Z | 2022-05-05T12:12:41.000Z | 2022-05-05T12:12:41 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language: []
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Electricity Transformer Temperature
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- time-series-forecasting
task_ids:
- univariate-time-series-forecasting
- multivariate-time-series-forecasting
dataset_info:
- config_name: h1
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 241978
num_examples: 1
- name: test
num_bytes: 77508960
num_examples: 240
- name: validation
num_bytes: 33916080
num_examples: 120
download_size: 2589657
dataset_size: 111667018
- config_name: h2
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 241978
num_examples: 1
- name: test
num_bytes: 77508960
num_examples: 240
- name: validation
num_bytes: 33916080
num_examples: 120
download_size: 2417960
dataset_size: 111667018
- config_name: m1
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 967738
num_examples: 1
- name: test
num_bytes: 1239008640
num_examples: 960
- name: validation
num_bytes: 542089920
num_examples: 480
download_size: 10360719
dataset_size: 1782066298
- config_name: m2
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 967738
num_examples: 1
- name: test
num_bytes: 1239008640
num_examples: 960
- name: validation
num_bytes: 542089920
num_examples: 480
download_size: 9677236
dataset_size: 1782066298
---
# Dataset Card for [Electricity Transformer Temperature](https://github.com/zhouhaoyi/ETDataset)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Electricity Transformer Dataset](https://github.com/zhouhaoyi/ETDataset)
- **Repository:** https://github.com/zhouhaoyi/ETDataset
- **Paper:** [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436)
- **Point of Contact:** [Haoyi Zhou](mailto:zhouhy@act.buaa.edu.cn)
### Dataset Summary
The electric power distribution problem is the distribution of electricity to different areas depending on its sequential usage. But predicting the future demand of a specific area is difficult, as it varies with weekdays, holidays, seasons, weather, temperatures, etc. However, no existing method can perform a long-term prediction based on super-long-term real-world data with high precision. Any false prediction may damage the electrical transformer. So currently, without an efficient method to predict future electric usage, managers have to make decisions based on an empirical number, which is much higher than the real-world demand. This causes unnecessary waste of electricity and equipment depreciation. On the other hand, the oil temperature can reflect the condition of the transformer. One of the most efficient strategies is to predict whether the electrical transformer's oil temperature is safe, avoiding unnecessary waste. As a result, to address this problem, the authors and Beijing Guowang Fuda Science & Technology Development Company have provided two years' worth of data.
Specifically, the dataset combines short-term periodical patterns, long-term periodical patterns, long-term trends, and many irregular patterns. The data are obtained from 2 electricity transformers at 2 stations and come at an `1H` (hourly) or `15T` (15-minute) frequency, containing 2 years * 365 days * 24 hours (* 4 for `15T`) = 17,520 (70,080 for `15T`) data points.
The target time series is the **O**il **T**emperature and the dataset comes with the following 6 covariates in the univariate setup:
* **H**igh **U**se**F**ul **L**oad
* **H**igh **U**se**L**ess **L**oad
* **M**iddle **U**se**F**ul **L**oad
* **M**iddle **U**se**L**ess **L**oad
* **L**ow **U**se**F**ul **L**oad
* **L**ow **U**se**L**ess **L**oad
### Dataset Usage
To load a particular variant of the dataset just specify its name e.g:
```python
load_dataset("ett", "m1", multivariate=False) # univariate 15-min frequency dataset from first transformer
```
or to specify a prediction length:
```python
load_dataset("ett", "h2", prediction_length=48) # multivariate dataset from second transformer with prediction length of 48 (hours)
```
### Supported Tasks and Leaderboards
The time series data is split into train/val/test set of 12/4/4 months respectively. Given the prediction length (default: 1 day (24 hours or 24*4 15T)) we create rolling windows of this size for the val/test sets.
#### `time-series-forecasting`
##### `univariate-time-series-forecasting`
The univariate time series forecasting task involves learning the future one-dimensional `target` values of a time series in a dataset for some `prediction_length` time steps. The performance of the forecast models can then be validated via the ground truth in the `validation` split and tested via the `test` split. The covariates are stored in the `feat_dynamic_real` key of each time series.
##### `multivariate-time-series-forecasting`
The multivariate time series forecasting task involves learning the future vector of `target` values of a time series in a dataset for some `prediction_length` time steps. Similar to the univariate setting, the performance of a multivariate model can be validated via the ground truth in the `validation` split and tested via the `test` split.
### Languages
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
'start': datetime.datetime(2012, 1, 1, 0, 0),
'target': [14.0, 18.0, 21.0, 20.0, 22.0, 20.0, ...],
'feat_static_cat': [0],
'feat_dynamic_real': [[0.3, 0.4], [0.1, 0.6], ...],
'item_id': 'OT'
}
```
### Data Fields
For the univariate regular time series each series has the following keys:
* `start`: a datetime of the first entry of each time series in the dataset
* `target`: an array[float32] of the actual target values
* `feat_static_cat`: an array[uint64] which contains a categorical identifier of each time series in the dataset
* `feat_dynamic_real`: optional array of covariate features
* `item_id`: a string identifier of each time series in a dataset for reference
For the multivariate time series the `target` is a vector of the multivariate dimension for each time point.
### Data Splits
The time series data is split into train/val/test set of 12/4/4 months respectively.
## Dataset Creation
### Curation Rationale
Develop time series methods that can perform a long-term prediction based on super long-term real-world data with high precision.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
* [Haoyi Zhou](mailto:zhouhy@act.buaa.edu.cn)
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
```tex
@inproceedings{haoyietal-informer-2021,
author = {Haoyi Zhou and
Shanghang Zhang and
Jieqi Peng and
Shuai Zhang and
Jianxin Li and
Hui Xiong and
Wancai Zhang},
title = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting},
booktitle = {The Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021, Virtual Conference},
volume = {35},
number = {12},
pages = {11106--11115},
publisher = {{AAAI} Press},
year = {2021},
}
```
### Contributions
Thanks to [@kashif](https://github.com/kashif) for adding this dataset. | [
-0.5542077422142029,
-0.5029224157333374,
0.1110188439488411,
0.22797797620296478,
-0.17433501780033112,
0.11420414596796036,
-0.21781590580940247,
-0.35154375433921814,
0.1394060105085373,
0.29166194796562195,
-0.7497151494026184,
-0.45367076992988586,
-0.43593519926071167,
0.095262475311... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PiC/phrase_similarity | PiC | 2023-01-20T16:32:19Z | 49 | 6 | phrase-in-context | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"l... | 2023-01-20T16:32:19Z | 2022-06-14T01:35:19.000Z | 2022-06-14T01:35:19 | ---
annotations_creators:
- expert-generated
language_creators:
- found
- expert-generated
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
paperswithcode_id: phrase-in-context
pretty_name: 'PiC: Phrase Similarity (PS)'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "PiC: Phrase Similarity"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://phrase-in-context.github.io/](https://phrase-in-context.github.io/)
- **Repository:** [https://github.com/phrase-in-context](https://github.com/phrase-in-context)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Thang Pham](<thangpham@auburn.edu>)
- **Size of downloaded dataset files:** 4.60 MB
- **Size of the generated dataset:** 2.96 MB
- **Total amount of disk used:** 7.56 MB
### Dataset Summary
PS is a binary classification task with the goal of predicting whether two multi-word noun phrases are semantically similar or not given *the same context* sentence.
This dataset contains ~10K pairs of two phrases along with their contexts used for disambiguation, since two phrases are not enough for semantic comparison.
Our ~10K examples were annotated by linguistic experts on <upwork.com> and verified in two rounds by 1000 Mturkers and 5 linguistic experts.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English.
## Dataset Structure
### Data Instances
**PS**
* Size of downloaded dataset files: 4.60 MB
* Size of the generated dataset: 2.96 MB
* Total amount of disk used: 7.56 MB
```
{
"phrase1": "annual run",
"phrase2": "yearlong performance",
"sentence1": "since 2004, the club has been a sponsor of the annual run for rigby to raise money for off-campus housing safety awareness.",
"sentence2": "since 2004, the club has been a sponsor of the yearlong performance for rigby to raise money for off-campus housing safety awareness.",
"label": 0,
"idx": 0,
}
```
### Data Fields
The data fields are the same among all splits.
* phrase1: a string feature.
* phrase2: a string feature.
* sentence1: a string feature.
* sentence2: a string feature.
* label: a classification label, with negative (0) and positive (1).
* idx: an int32 feature.
### Data Splits
| name |train |validation|test |
|--------------------|----:|--------:|----:|
|PS |7362| 1052|2102|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The source passages + answers are from Wikipedia and the source of queries were produced by our hired linguistic experts from [Upwork.com](https://upwork.com).
#### Who are the source language producers?
We hired 13 linguistic experts from [Upwork.com](https://upwork.com) for annotation and more than 1000 human annotators on Mechanical Turk along with another set of 5 Upwork experts for 2-round verification.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
13 linguistic experts from [Upwork.com](https://upwork.com).
### Personal and Sensitive Information
No annotator identifying details are provided.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset is a joint work between Adobe Research and Auburn University.
Creators: [Thang M. Pham](https://scholar.google.com/citations?user=eNrX3mYAAAAJ), [David Seunghyun Yoon](https://david-yoon.github.io/), [Trung Bui](https://sites.google.com/site/trungbuistanford/), and [Anh Nguyen](https://anhnguyen.me).
[@PMThangXAI](https://twitter.com/pmthangxai) added this dataset to HuggingFace.
### Licensing Information
This dataset is distributed under [Creative Commons Attribution-NonCommercial 4.0 International (CC-BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/)
### Citation Information
```
@article{pham2022PiC,
title={PiC: A Phrase-in-Context Dataset for Phrase Understanding and Semantic Search},
author={Pham, Thang M and Yoon, Seunghyun and Bui, Trung and Nguyen, Anh},
journal={arXiv preprint arXiv:2207.09068},
year={2022}
}
``` | [
-0.3662527799606323,
-0.6806850433349609,
0.16328781843185425,
0.25437527894973755,
-0.4102528393268585,
-0.029269998893141747,
-0.29426684975624084,
-0.5193867087364197,
0.5000818967819214,
0.3554080128669739,
-0.48175549507141113,
-0.8902931809425354,
-0.5117248296737671,
0.2466324418783... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vesteinn/sosialurin-faroese-pos | vesteinn | 2022-06-16T15:49:46Z | 49 | 1 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-06-16T15:49:46Z | 2022-06-16T15:44:25.000Z | 2022-06-16T15:44:25 | ---
license: cc-by-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
loubnabnl/github-code-small-filtering | loubnabnl | 2022-07-11T23:18:05Z | 49 | 0 | null | [
"region:us"
] | 2022-07-11T23:18:05Z | 2022-07-11T21:13:50.000Z | 2022-07-11T21:13:50 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
viola77data/recycling-dataset | viola77data | 2022-09-13T13:17:15Z | 49 | 5 | null | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"recycling",
"image-classification",
"region:us"
] | 2022-09-13T13:17:15Z | 2022-09-13T12:43:15.000Z | 2022-09-13T12:43:15 | ---
annotations_creators: []
language:
- en
language_creators:
- crowdsourced
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: recycling-dataset
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- recycling
- image-classification
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
---
# Dataset Card for recycling-dataset
### Dataset Summary
This is a recycling dataset that can be used for image classification. It has 11 categories:
- aluminium
- batteries
- cardboard
- disposable plates
- glass
- hard plastic
- paper
- paper towel
- polystyrene
- soft plastics
- takeaway cups
It was scraped from DuckDuckGo using this tool: https://pypi.org/project/jmd-imagescraper/
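A minimal loading sketch, assuming the usual `image`/`label` layout of Hub image-classification datasets:
```python
from datasets import load_dataset

ds = load_dataset("viola77data/recycling-dataset", split="train")

# "image" and "label" are assumed column names; check ds.column_names to confirm.
names = ds.features["label"].names  # the 11 categories listed above
sample = ds[0]
print(names[sample["label"]])
```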
| [
-0.6429786682128906,
0.1292824149131775,
0.07723426073789597,
-0.35239508748054504,
-0.3515804409980774,
0.07481136173009872,
0.45893287658691406,
0.29735860228538513,
0.16223740577697754,
0.5778309106826782,
-0.4683198034763336,
-0.3790856897830963,
-0.2915903329849243,
0.2285462021827697... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sileod/probability_words_nli | sileod | 2023-09-06T14:56:43Z | 49 | 4 | null | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:multiple-choice-qa",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"annotations_creators:expert-generated",
"langu... | 2023-09-06T14:56:43Z | 2022-11-03T14:21:14.000Z | 2022-11-03T14:21:14 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: 'probability_words_nli'
paperswithcode_id: probability-words-nli
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
- multiple-choice
- question-answering
task_ids:
- open-domain-qa
- multiple-choice-qa
- natural-language-inference
- multi-input-text-classification
tags:
- wep
- words of estimative probability
- probability
- logical reasoning
- soft logic
- nli
- verbal probabilities
- natural-language-inference
- reasoning
- logic
train-eval-index:
- config: usnli
task: text-classification
task_id: multi-class-classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: context
sentence2: hypothesis
label: label
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
- config: reasoning-1hop
task: text-classification
task_id: multi-class-classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: context
sentence2: hypothesis
label: label
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
- config: reasoning-2hop
task: text-classification
task_id: multi-class-classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: context
sentence2: hypothesis
label: label
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
---
# Dataset accompanying the "Probing neural language models for understanding of words of estimative probability" article
This dataset tests the ability of language models to correctly capture the meaning of words denoting probabilities (WEP, also called verbal probabilities), e.g. "probably", "maybe", "surely", "impossible".
We used probabilistic soft logic to combine probabilistic statements expressed with WEP (WEP-Reasoning), and we also used the UNLI dataset (https://nlp.jhu.edu/unli/) to directly check whether models can detect the WEP matching human-annotated probabilities, following [Fagen-Ulmschneider, 2018](https://github.com/wadefagen/datasets/tree/master/Perception-of-Probability-Words).
The dataset can be used as natural language inference data (context, hypothesis, label) or multiple-choice question answering (context, valid_hypothesis, invalid_hypothesis).
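For example, a minimal loading sketch; the config and column names below are taken from the train-eval-index metadata above:
```python
from datasets import load_dataset

# Configs: "usnli", "reasoning-1hop", "reasoning-2hop" (see metadata above).
wep = load_dataset("sileod/probability_words_nli", "reasoning-1hop")

ex = wep["validation"][0]
print(ex["context"])
print(ex["hypothesis"], "->", ex["label"])
```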
Code : [colab](https://colab.research.google.com/drive/10ILEWY2-J6Q1hT97cCB3eoHJwGSflKHp?usp=sharing)
# Citation
https://arxiv.org/abs/2211.03358
```bib
@inproceedings{sileo-moens-2023-probing,
title = "Probing neural language models for understanding of words of estimative probability",
author = "Sileo, Damien and
Moens, Marie-francine",
booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.starsem-1.41",
doi = "10.18653/v1/2023.starsem-1.41",
pages = "469--476",
}
```
| [
-0.32028353214263916,
-0.7960188388824463,
0.40315505862236023,
0.03363529592752457,
-0.15246863663196564,
-0.3052002787590027,
-0.3957430422306061,
-0.3392852246761322,
-0.09617359191179276,
0.35776254534721375,
-0.56282639503479,
-0.5313106775283813,
-0.34891200065612793,
0.1422469764947... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/mirna | bigbio | 2022-12-22T15:45:38Z | 49 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-3.0",
"region:us"
] | 2022-12-22T15:45:38Z | 2022-11-13T22:10:00.000Z | 2022-11-13T22:10:00 |
---
language:
- en
bigbio_language:
- English
license: cc-by-nc-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_NC_3p0
pretty_name: miRNA
homepage: https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/download-mirna-test-corpus.html
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for miRNA
## Dataset Description
- **Homepage:** https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/download-mirna-test-corpus.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
The corpus consists of 301 Medline citations. The documents were screened for
mentions of miRNA in the abstract text. Gene, disease and miRNA entities were manually
annotated. The corpus comprises two separate files, a train and a test set, drawn
from 201 and 100 documents respectively.
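A hedged loading sketch; the config name follows the common BigBio `<name>_source` convention and is an assumption, not verified against this loader:
```python
from datasets import load_dataset

# "mirna_source" is an assumed config name (BigBio loaders usually expose
# "<name>_source" and "<name>_bigbio_kb"); recent datasets versions may also
# need trust_remote_code=True for script-based loaders.
mirna = load_dataset("bigbio/mirna", name="mirna_source")
print(mirna["train"][0])
```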
## Citation Information
```
@Article{Bagewadi2014,
author={Bagewadi, Shweta
and Bobi{\'{c}}, Tamara
and Hofmann-Apitius, Martin
and Fluck, Juliane
and Klinger, Roman},
title={Detecting miRNA Mentions and Relations in Biomedical Literature},
journal={F1000Research},
year={2014},
month={Aug},
day={28},
publisher={F1000Research},
volume={3},
pages={205-205},
keywords={MicroRNAs; corpus; prediction algorithms},
abstract={
INTRODUCTION: MicroRNAs (miRNAs) have demonstrated their potential as post-transcriptional
gene expression regulators, participating in a wide spectrum of regulatory events such as
apoptosis, differentiation, and stress response. Apart from the role of miRNAs in normal
physiology, their dysregulation is implicated in a vast array of diseases. Dissection of
miRNA-related associations are valuable for contemplating their mechanism in diseases,
leading to the discovery of novel miRNAs for disease prognosis, diagnosis, and therapy.
MOTIVATION: Apart from databases and prediction tools, miRNA-related information is largely
available as unstructured text. Manual retrieval of these associations can be labor-intensive
due to steadily growing number of publications. Additionally, most of the published miRNA
entity recognition methods are keyword based, further subjected to manual inspection for
retrieval of relations. Despite the fact that several databases host miRNA-associations
derived from text, lower sensitivity and lack of published details for miRNA entity
recognition and associated relations identification has motivated the need for developing
comprehensive methods that are freely available for the scientific community. Additionally,
the lack of a standard corpus for miRNA-relations has caused difficulty in evaluating the
available systems. We propose methods to automatically extract mentions of miRNAs, species,
genes/proteins, disease, and relations from scientific literature. Our generated corpora,
along with dictionaries, and miRNA regular expression are freely available for academic
purposes. To our knowledge, these resources are the most comprehensive developed so far.
RESULTS: The identification of specific miRNA mentions reaches a recall of 0.94 and
precision of 0.93. Extraction of miRNA-disease and miRNA-gene relations lead to an
F1 score of up to 0.76. A comparison of the information extracted by our approach to
the databases miR2Disease and miRSel for the extraction of Alzheimer's disease
related relations shows the capability of our proposed methods in identifying correct
relations with improved sensitivity. The published resources and described methods can
help the researchers for maximal retrieval of miRNA-relations and generation of
miRNA-regulatory networks. AVAILABILITY: The training and test corpora, annotation
guidelines, developed dictionaries, and supplementary files are available at
http://www.scai.fraunhofer.de/mirna-corpora.html.
},
note={26535109[pmid]},
note={PMC4602280[pmcid]},
issn={2046-1402},
url={https://pubmed.ncbi.nlm.nih.gov/26535109},
language={eng}
}
```
| [
-0.3674076795578003,
-0.5534263253211975,
0.4852651059627533,
-0.00413883663713932,
-0.3649528920650482,
-0.07053778320550919,
-0.10757599771022797,
-0.47411349415779114,
0.8078727722167969,
0.22605910897254944,
-0.29029491543769836,
-0.5504245758056641,
-0.5548713803291321,
0.252776771783... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dbarbedillo/SMS_Spam_Multilingual_Collection_Dataset | dbarbedillo | 2023-01-13T03:07:17Z | 49 | 6 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"language:zh",
"language:es",
"language:hi",
"language:fr",
"language:de",
"language:ar",
"language:bn",
"language:ru",
"language:pt",
"language:id",
"language:ur",
"language:ja",
"language:pa",
"langua... | 2023-01-13T03:07:17Z | 2023-01-13T02:13:03.000Z | 2023-01-13T02:13:03 | ---
license: gpl
task_categories:
- text-classification
language:
- en
- zh
- es
- hi
- fr
- de
- ar
- bn
- ru
- pt
- id
- ur
- ja
- pa
- jv
- tr
- ko
- mr
- uk
- sv
- 'no'
size_categories:
- 1K<n<10K
---
SMS Spam Multilingual Collection Dataset
Collection of Multilingual SMS messages tagged as spam or legitimate
About Dataset
Context
The SMS Spam Collection is a set of SMS messages tagged for SMS spam research. It originally contained one set of 5,574 English messages, tagged as ham (legitimate) or spam, which were later machine-translated into Hindi, German and French.
The text has been further translated into Spanish, Chinese, Arabic, Bengali, Russian, Portuguese, Indonesian, Urdu, Japanese, Punjabi, Javanese, Turkish, Korean, Marathi, Ukrainian, Swedish, and Norwegian using M2M100_418M, a multilingual encoder-decoder (seq-to-seq) model trained for many-to-many multilingual translation, created by Facebook AI.
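A minimal sketch of that translation step using the documented M2M100 API (the exact generation settings used to build this dataset are not specified here):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

# Translate one English SMS into Spanish, as done for the augmented languages.
tokenizer.src_lang = "en"
encoded = tokenizer("WINNER!! You have been selected for a free prize.", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("es"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```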
Content
The augmented Dataset contains multilingual text and corresponding labels.
ham - non-spam text
spam - spam text
Acknowledgments
The original English text was taken from: https://www.kaggle.com/uciml/sms-spam-collection-dataset
Hindi, German and French taken from: https://www.kaggle.com/datasets/rajnathpatel/multilingual-spam-data | [
-0.47456109523773193,
-0.7974671721458435,
-0.08519823104143143,
0.4887039363384247,
-0.2992260456085205,
0.02526966668665409,
-0.33944910764694214,
-0.5932126641273499,
0.2606070339679718,
0.9060424566268921,
-0.5776344537734985,
-0.6346216797828674,
-0.7741115093231201,
0.538247525691986... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EleutherAI/pile-duped-pythia-random-sampled | EleutherAI | 2023-08-25T08:07:30Z | 49 | 1 | null | [
"region:us"
] | 2023-08-25T08:07:30Z | 2023-03-27T08:03:38.000Z | 2023-03-27T08:03:38 | ---
dataset_info:
features:
- name: Index
dtype: int64
- name: 70M
dtype: float64
- name: 160M
dtype: float64
- name: 410M
dtype: float64
- name: 1B
dtype: float64
- name: 1.4B
dtype: float64
- name: 2.8B
dtype: float64
- name: 6.9B
dtype: float64
- name: 12B
dtype: float64
- name: Tokens
sequence: uint16
splits:
- name: train
num_bytes: 1020000000
num_examples: 5000000
download_size: 915501044
dataset_size: 1020000000
---
# Dataset Card for "pile-duped-pythia-random-sampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.49024510383605957,
-0.3539251983165741,
0.11906787008047104,
0.3055786192417145,
-0.3939777910709381,
0.09718449413776398,
0.5880959630012512,
0.005765738897025585,
0.7962818145751953,
0.6046671271324158,
-0.5452450513839722,
-0.5854735374450684,
-0.4996420741081238,
-0.4259212017059326... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BelleGroup/multiturn_chat_0.8M | BelleGroup | 2023-04-02T09:15:32Z | 49 | 80 | null | [
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:gpl-3.0",
"region:us"
] | 2023-04-02T09:15:32Z | 2023-04-02T08:55:44.000Z | 2023-04-02T08:55:44 | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 100K<n<1M
---
# Multiturn Chat 0.8M
## 内容
包含约80万条由[BELLE](https://github.com/LianjiaTech/BELLE)项目生成的*用户*与*助手*的多轮对话。
注意:此数据集是由ChatGPT产生的,未经过严格校验,内容可能包含错误。使用过程中请注意这一点。
**instruction**中包含多轮对话的上文内容,以*Human:*和*Assistant:*区分,**output**中包含当前*助手*角色的回答。
## 样例
```
{
"instruction": "Human:请根据题材「健身房」,写一篇800字的文章。\nAssistant:好的,以下是由我生成的800字「健身房」题材文章:\n在现代社会,人们越来越珍视健康和身材。健身房就是一个人们可以锻炼身体的场所,也是一个社交的场所。在健身房,人们可以通过器械训练,有氧运动和瑜伽等方式来达到健身的目的。\n对于喜欢力量训练的人来说,选择健身房的一个原因是因为那里的器械更加完备,可以满足训练的需求。当然,不仅仅是体力上的训练,健身房同样也是锻炼毅力和耐力的地方。因为每一次锻炼的过程都需要坚持和勇气。\n而对于那些想要减肥的人来说,健身房也是非常重要的。有氧运动比如慢跑,跳操,游泳等都是非常有效的减肥方式。而且因为健身房的压力比较大,所以很容易使人们保持锻炼的热情和动力。\n一些人则会选择瑜伽馆作为健身的场所,因为它们通常比较安静,能够让人们放松。瑜伽有纾解压力,提高柔韧性,强化肌肉的好处,而且更适合那些不喜欢高强度训练的人。\n总的来说,健身房和瑜伽馆是现代社会不可或缺的健身场所。无论您是力量训练爱好者,减肥追求者还是想要放松的人,您都可以在这里找到适合自己的锻炼方式。\nHuman:请从上文摘取三个句子进行打分。\nAssistant:",
"input": "",
"output": "好的。以下是我摘取的三个句子:\n1. \"在健身房,人们可以通过器械训练,有氧运动和瑜伽等方式来达到健身的目的。\"\n2. \"而对于那些想要减肥的人来说,健身房也是非常重要的。\"\n3. \"总的来说,健身房和瑜伽馆是现代社会不可或缺的健身场所。\"\n网页端评分界面还未准备好,不能在这里为你进行打分,请见谅。"
}
```
### 字段
```
instruction: 指令
input: 输入(本数据集均为空)
output: 输出
```
## 局限性和使用限制
我们要求开发者仅将我们开源的代码、数据、模型及后续衍生物用于研究目的,不得用于商业,以及其他会对社会带来危害的用途。
由于数据是由*ChatGPT*生成的,未经严格验证,在事实性和其他方面还存在一些不足。因此,在使用此数据集时,请务必注意甄别。
本数据集不代表任何一方的立场、利益或想法,无关任何团体的任何类型的主张。因使用本数据集带来的任何损害、纠纷,本项目的开发者不承担任何责任。
***
# Multiturn Chat 0.8M
## Contents
Includes approx. 0.8M Chinese multiturn dialogs between *human* and *assistant*.
Note: this subset was generated by *ChatGPT* and was not strictly verified. The dialog contents might contain errors. Please keep this in mind when using this subset.
**instruction** contains the dialog history context, with turns distinguishable by *Human:* and *Assistant:*; **output** contains the current reply by the *assistant*.
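A minimal sketch for splitting the `instruction` field back into turns, relying only on the *Human:*/*Assistant:* markers described above:
```python
import re

def split_turns(example):
    """Split the dialog history in `instruction` into (speaker, text) pairs
    and append the assistant's current reply from `output`."""
    parts = re.split(r"(Human:|Assistant:)", example["instruction"])
    turns, speaker = [], None
    for part in parts:
        if part in ("Human:", "Assistant:"):
            speaker = part[:-1]  # drop the trailing colon
        elif speaker is not None and part.strip():
            turns.append((speaker, part.strip()))
    turns.append(("Assistant", example["output"]))
    return turns
```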
## Sample
```
{
"instruction": "Human:请根据题材「健身房」,写一篇800字的文章。\nAssistant:好的,以下是由我生成的800字「健身房」题材文章:\n在现代社会,人们越来越珍视健康和身材。健身房就是一个人们可以锻炼身体的场所,也是一个社交的场所。在健身房,人们可以通过器械训练,有氧运动和瑜伽等方式来达到健身的目的。\n对于喜欢力量训练的人来说,选择健身房的一个原因是因为那里的器械更加完备,可以满足训练的需求。当然,不仅仅是体力上的训练,健身房同样也是锻炼毅力和耐力的地方。因为每一次锻炼的过程都需要坚持和勇气。\n而对于那些想要减肥的人来说,健身房也是非常重要的。有氧运动比如慢跑,跳操,游泳等都是非常有效的减肥方式。而且因为健身房的压力比较大,所以很容易使人们保持锻炼的热情和动力。\n一些人则会选择瑜伽馆作为健身的场所,因为它们通常比较安静,能够让人们放松。瑜伽有纾解压力,提高柔韧性,强化肌肉的好处,而且更适合那些不喜欢高强度训练的人。\n总的来说,健身房和瑜伽馆是现代社会不可或缺的健身场所。无论您是力量训练爱好者,减肥追求者还是想要放松的人,您都可以在这里找到适合自己的锻炼方式。\nHuman:请从上文摘取三个句子进行打分。\nAssistant:",
"input": "",
"output": "好的。以下是我摘取的三个句子:\n1. \"在健身房,人们可以通过器械训练,有氧运动和瑜伽等方式来达到健身的目的。\"\n2. \"而对于那些想要减肥的人来说,健身房也是非常重要的。\"\n3. \"总的来说,健身房和瑜伽馆是现代社会不可或缺的健身场所。\"\n网页端评分界面还未准备好,不能在这里为你进行打分,请见谅。"
}
```
### Schema
```
instruction: the instruction
input: the input (empty throughout this dataset)
output: the output
```
## Limitation and Usage Limits
We require developers to use the open-sourced code, data, model and any other artifacts generated via this project for research purposes only. Commercial use and other potentially harmful use cases are not allowed.
Since this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.
This dataset does not represent anyone's ground, interest or thought, and is not related to any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project. | [
-0.6191858649253845,
-0.7641544342041016,
0.32445237040519714,
0.5066962242126465,
-0.5307691097259521,
-0.20672574639320374,
-0.062343403697013855,
-0.4693004786968231,
0.510732114315033,
0.551213264465332,
-0.8213893175125122,
-0.5719627737998962,
-0.7040378451347351,
0.05969820544123649... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
distil-whisper/ami-sdm | distil-whisper | 2023-09-25T10:30:13Z | 49 | 0 | null | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2023-09-25T10:30:13Z | 2023-04-11T20:12:21.000Z | 2023-04-11T20:12:21 | ---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: AMI SDM
---
# Distil Whisper: AMI SDM
This is a variant of the [AMI SDM](https://huggingface.co/datasets/edinburghstr/ami) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/edinburghstr/ami).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-sdm", "sdm")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-sdm", "sdm", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
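Each sample carries the pseudo-label next to the original reference. The column names below follow the convention of other Distil-Whisper pseudo-labelled datasets and are assumptions; check `sample.keys()` to confirm:
```python
# "text" (human reference) and "whisper_transcript" (large-v2 pseudo-label)
# are assumed column names.
print(sample["text"])
print(sample["whisper_transcript"])
```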
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
| [
-0.2523479461669922,
-0.5593910217285156,
0.3253127932548523,
0.3717023730278015,
-0.2638152539730072,
0.04056856036186218,
-0.08828933537006378,
-0.09497708082199097,
0.465664803981781,
0.5832692384719849,
-0.8366614580154419,
-0.5716754198074341,
-0.6912745237350464,
0.0770111158490181,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TCFSBN/arxiv-test-data | TCFSBN | 2023-04-26T19:47:05Z | 49 | 0 | null | [
"region:us"
] | 2023-04-26T19:47:05Z | 2023-04-26T19:46:27.000Z | 2023-04-26T19:46:27 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chenxwh/gen-xcopa | chenxwh | 2023-05-29T15:04:40Z | 49 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-05-29T15:04:40Z | 2023-05-25T23:28:01.000Z | 2023-05-25T23:28:01 | ---
license: cc-by-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Xenova/quickdraw | Xenova | 2023-06-24T15:04:27Z | 49 | 2 | quick-draw-dataset | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
... | 2023-06-24T15:04:27Z | 2023-06-23T21:18:53.000Z | 2023-06-23T21:18:53 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: quick-draw-dataset
pretty_name: Quick, Draw!
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': aircraft carrier
'1': airplane
'2': alarm clock
'3': ambulance
'4': angel
'5': animal migration
'6': ant
'7': anvil
'8': apple
'9': arm
'10': asparagus
'11': axe
'12': backpack
'13': banana
'14': bandage
'15': barn
'16': baseball bat
'17': baseball
'18': basket
'19': basketball
'20': bat
'21': bathtub
'22': beach
'23': bear
'24': beard
'25': bed
'26': bee
'27': belt
'28': bench
'29': bicycle
'30': binoculars
'31': bird
'32': birthday cake
'33': blackberry
'34': blueberry
'35': book
'36': boomerang
'37': bottlecap
'38': bowtie
'39': bracelet
'40': brain
'41': bread
'42': bridge
'43': broccoli
'44': broom
'45': bucket
'46': bulldozer
'47': bus
'48': bush
'49': butterfly
'50': cactus
'51': cake
'52': calculator
'53': calendar
'54': camel
'55': camera
'56': camouflage
'57': campfire
'58': candle
'59': cannon
'60': canoe
'61': car
'62': carrot
'63': castle
'64': cat
'65': ceiling fan
'66': cell phone
'67': cello
'68': chair
'69': chandelier
'70': church
'71': circle
'72': clarinet
'73': clock
'74': cloud
'75': coffee cup
'76': compass
'77': computer
'78': cookie
'79': cooler
'80': couch
'81': cow
'82': crab
'83': crayon
'84': crocodile
'85': crown
'86': cruise ship
'87': cup
'88': diamond
'89': dishwasher
'90': diving board
'91': dog
'92': dolphin
'93': donut
'94': door
'95': dragon
'96': dresser
'97': drill
'98': drums
'99': duck
'100': dumbbell
'101': ear
'102': elbow
'103': elephant
'104': envelope
'105': eraser
'106': eye
'107': eyeglasses
'108': face
'109': fan
'110': feather
'111': fence
'112': finger
'113': fire hydrant
'114': fireplace
'115': firetruck
'116': fish
'117': flamingo
'118': flashlight
'119': flip flops
'120': floor lamp
'121': flower
'122': flying saucer
'123': foot
'124': fork
'125': frog
'126': frying pan
'127': garden hose
'128': garden
'129': giraffe
'130': goatee
'131': golf club
'132': grapes
'133': grass
'134': guitar
'135': hamburger
'136': hammer
'137': hand
'138': harp
'139': hat
'140': headphones
'141': hedgehog
'142': helicopter
'143': helmet
'144': hexagon
'145': hockey puck
'146': hockey stick
'147': horse
'148': hospital
'149': hot air balloon
'150': hot dog
'151': hot tub
'152': hourglass
'153': house plant
'154': house
'155': hurricane
'156': ice cream
'157': jacket
'158': jail
'159': kangaroo
'160': key
'161': keyboard
'162': knee
'163': knife
'164': ladder
'165': lantern
'166': laptop
'167': leaf
'168': leg
'169': light bulb
'170': lighter
'171': lighthouse
'172': lightning
'173': line
'174': lion
'175': lipstick
'176': lobster
'177': lollipop
'178': mailbox
'179': map
'180': marker
'181': matches
'182': megaphone
'183': mermaid
'184': microphone
'185': microwave
'186': monkey
'187': moon
'188': mosquito
'189': motorbike
'190': mountain
'191': mouse
'192': moustache
'193': mouth
'194': mug
'195': mushroom
'196': nail
'197': necklace
'198': nose
'199': ocean
'200': octagon
'201': octopus
'202': onion
'203': oven
'204': owl
'205': paint can
'206': paintbrush
'207': palm tree
'208': panda
'209': pants
'210': paper clip
'211': parachute
'212': parrot
'213': passport
'214': peanut
'215': pear
'216': peas
'217': pencil
'218': penguin
'219': piano
'220': pickup truck
'221': picture frame
'222': pig
'223': pillow
'224': pineapple
'225': pizza
'226': pliers
'227': police car
'228': pond
'229': pool
'230': popsicle
'231': postcard
'232': potato
'233': power outlet
'234': purse
'235': rabbit
'236': raccoon
'237': radio
'238': rain
'239': rainbow
'240': rake
'241': remote control
'242': rhinoceros
'243': rifle
'244': river
'245': roller coaster
'246': rollerskates
'247': sailboat
'248': sandwich
'249': saw
'250': saxophone
'251': school bus
'252': scissors
'253': scorpion
'254': screwdriver
'255': sea turtle
'256': see saw
'257': shark
'258': sheep
'259': shoe
'260': shorts
'261': shovel
'262': sink
'263': skateboard
'264': skull
'265': skyscraper
'266': sleeping bag
'267': smiley face
'268': snail
'269': snake
'270': snorkel
'271': snowflake
'272': snowman
'273': soccer ball
'274': sock
'275': speedboat
'276': spider
'277': spoon
'278': spreadsheet
'279': square
'280': squiggle
'281': squirrel
'282': stairs
'283': star
'284': steak
'285': stereo
'286': stethoscope
'287': stitches
'288': stop sign
'289': stove
'290': strawberry
'291': streetlight
'292': string bean
'293': submarine
'294': suitcase
'295': sun
'296': swan
'297': sweater
'298': swing set
'299': sword
'300': syringe
'301': t-shirt
'302': table
'303': teapot
'304': teddy-bear
'305': telephone
'306': television
'307': tennis racquet
'308': tent
'309': The Eiffel Tower
'310': The Great Wall of China
'311': The Mona Lisa
'312': tiger
'313': toaster
'314': toe
'315': toilet
'316': tooth
'317': toothbrush
'318': toothpaste
'319': tornado
'320': tractor
'321': traffic light
'322': train
'323': tree
'324': triangle
'325': trombone
'326': truck
'327': trumpet
'328': umbrella
'329': underwear
'330': van
'331': vase
'332': violin
'333': washing machine
'334': watermelon
'335': waterslide
'336': whale
'337': wheel
'338': windmill
'339': wine bottle
'340': wine glass
'341': wristwatch
'342': yoga
'343': zebra
'344': zigzag
splits:
- name: train
num_bytes: 19761125464.75
num_examples: 50426266
download_size: 18927763475
dataset_size: 19761125464.75
---
# Dataset Card for Quick, Draw!
This is a processed version of Google's [Quick, Draw!](https://huggingface.co/datasets/quickdraw/) dataset, made compatible with the latest versions of 🤗 Datasets that support .parquet files. NOTE: this dataset only contains the "preprocessed_bitmaps" subset of the original dataset.
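Given the ~19 GB train split, a streaming sketch may be preferable to a full download:
```python
from datasets import load_dataset, load_dataset_builder

ds = load_dataset("Xenova/quickdraw", split="train", streaming=True)
sample = next(iter(ds))

# Map the integer label back to one of the 345 class names listed above.
names = load_dataset_builder("Xenova/quickdraw").info.features["label"].names
print(names[sample["label"]])
```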
| [
-0.5132231116294861,
-0.6257073879241943,
0.17065688967704773,
0.33110296726226807,
-0.7737011313438416,
0.12197085469961166,
0.1455354541540146,
-0.25978362560272217,
0.9226865768432617,
0.9632256627082825,
-0.841486930847168,
-0.7939895391464233,
-0.5412246584892273,
-0.19685669243335724... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
joonhok-exo-ai/korean_law_open_data_precedents | joonhok-exo-ai | 2023-07-05T08:43:35Z | 49 | 3 | null | [
"size_categories:10K<n<100K",
"language:ko",
"license:openrail",
"legal",
"region:us"
] | 2023-07-05T08:43:35Z | 2023-06-29T12:51:31.000Z | 2023-06-29T12:51:31 | ---
language:
- ko
tags:
- legal
size_categories:
- 10K<n<100K
license: openrail
---
# Dataset Card for korean_law_open_data_precedents
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [김준호](mailto:joonhok@smartfitnow.com)
### Dataset Summary
This is the complete set of court precedents provided by the [법제처 국가법령 공동활용 센터](https://open.law.go.kr/LSO/main.do) (the Korean Ministry of Government Legislation's open law data center).
## Dataset Structure
### Data Instances
Each record looks like the following.
The fields mostly follow the output of the precedent full-text retrieval API, except that the "법원종류코드" (court type code) and "사건종류코드" (case type code) fields were dropped, and the field documented as "판시유형" is actually returned as "판결유형", so the actual response key is used. Finally, the "판례내용" field was replaced by "전문" (full text).
```
{
'판례정보일련번호': 101924
'사건명': '손해배상'
'사건번호': '85다카1594'
'선고일자': 19860722,
'선고': '선고'
'법원명': '대법원'
'사건종류명': '민사'
'판결유형': '판결'
'판시사항': '가. 미성년자가 부모의 개호를 받을 수 있는 경우, 손해로서의 개호인 비용 / 나. 호프만식계산법에 의한 일실이익 산정의 적부 다. 연별 호프만식계산법에 의하여 중간이자를 공제하는 경우, 단리연금 현가율이 20을 넘는 경우의 일실이익 산정방법'
'판결요지': '가. 신체의 부자유로 인하여 개호인의 조력을 받을 필요가 있는 경우에는 비록 피해자가 미성년자이고 그의 부모가 개호를 할 수 있는 형편에 있다 하더라도 반드시 그 부모의 개호를 받아야 한다고 단정할 수 없음은 물론, 가사 그 부모의 개호를 받게 된다고 하더라도 이로 인하여 피해자가 입는 손해는 특별한 사정이 없는 한 통상의 개호인 비용 전액이다. 나. 호프만식계산법에 의하여 중간이자를 공제하여 장래의 일실이익의 현가를 산정하는 것은 위법한 것이 아니다. 다. 연별 호프만식계산법에 의하여 중간이자를 공제하는 경우에 단리연금현가율이 20을 넘는 경우에는 그 단리연금현가율을 그대로 적용하여 그 현가를 산정하게 되면 현가로 받게 되는 금액의 이자가 매월 입게 되는 손해액보다 많게 되어 손해액보다 더 많은 금원을 배상하게 되는 불합리한 결과를 가져오게 되므로 그 단리연금현가율이 결과적으로 20을 넘는 경우에 있어서는 그 수치표상의 단리연금현가율이 얼마인지를 불문하고 모두 20을 적용 계산함으로써 피해자가 과잉배상을 받는 일이 없도록 하여야 한다.'
'참조조문': '가.나.다. 민법 제763조'
'참조판례': '나. 대법원 1981.9.22 선고 81다588 판결, 1985.10.22 선고 85다카819 판결 / 다. 대법원 1985.10.22 선고 85다카819 판결, 1986.3.25 선고 85다카2375 판결'
'전문': '【원고, 피상고인】 (...이하 생략...)'
}
```
### Data Fields
Most other fields need no special explanation, but note that the value of the "선고일자" (decision date) field is a number, not a string. In some records the month and day information is missing and only the year remains, so the value has just 4 digits.
Also note that some fields, such as "사건명" (case name), may be empty.
## Dataset Creation
### Curation Rationale
The precedent data in this dataset can also be accessed through the open-data API, but this dataset was created because:
1. iterating over the entire collection via the API is cumbersome,
2. parsing and preprocessing the API responses every time is tedious, and
3. some errors in the API responses needed to be cleaned up in advance.
### Source Data
#### Initial Data Collection and Normalization
This dataset was collected using the National Law Information sharing center's "precedent list retrieval API" and "precedent full-text retrieval API".
The list API was called first to collect the precedent serial numbers, and the full-text API was then called with each serial number to collect the precedent data.
The full text can be requested in two formats, XML and HTML. To verify and clean the data,
both formats were requested for the entire collection and the two responses were compared; for some records
the values differ depending on the request format.
For example, when the precedent with serial number 152179 is requested in XML and HTML format, the "【원심판결】" (lower-court decision) part of the "전문" (full text) looks as follows.
When requested in XML format:
```
"1. 서울중앙지방법원 2009. 4. 3. 선고 2009고합167 판결(이하 ‘제1원심판결’이라고 한다) / 2. 서울중앙지방법원 2009. 5. 8. 선고 2009고합416 판결(이하 ‘제2원심판결’이라고 한다)"
```
When requested in HTML format:
```
서울중앙지방법원 2009. 4. 3. 선고 2009고합167 판결
```
There were dozens of records whose "【원심판결】" part differed by request format like this; in those cases this dataset uses the response containing more information (the XML response in the example above).
In addition, a few records contained errors in the data itself in both formats (broken statute hyperlink markup, malformed anonymization markers, etc.); these were
corrected by hand.
Finally, some records contained images; all images were omitted and only the text portions were kept.
Records whose full text contained errors and were corrected by hand: 212537, 188351, 188019, 200567
Records that contain images:
184135,
182916,
186027,
185375,
184151,
184597,
186156,
184655,
185123,
198440,
197577
## Additional Information
### Dataset Curators
김준호 ([LinkedIn](https://www.linkedin.com/in/joonho-kim/)): I created this dataset because I needed it myself while building an AI-powered legal service.
### Contributions
If you find any errors in the data, please contact [joonhok@smartfitnow.com](mailto:joonhok@smartfitnow.com);
they will be reviewed and reflected in the dataset. | [
-0.7663692235946655,
-0.5015878081321716,
0.26952341198921204,
0.4709354043006897,
-0.39551687240600586,
0.01565594971179962,
0.392896443605423,
-0.29277998208999634,
0.5704069137573242,
0.3235642910003662,
-0.5253772139549255,
-0.6848554015159607,
-0.6068456172943115,
0.24581949412822723,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
awettig/Pile-Wikipedia-0.5B-6K-opt | awettig | 2023-07-10T19:41:27Z | 49 | 0 | null | [
"region:us"
] | 2023-07-10T19:41:27Z | 2023-07-10T19:40:11.000Z | 2023-07-10T19:40:11 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 5651184786
num_examples: 81380
- name: test
num_bytes: 64945692
num_examples: 813
download_size: 1476548346
dataset_size: 5716130478
---
# Dataset Card for "Pile-Wikipedia-0.5B-6K-opt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8984981775283813,
-0.17695195972919464,
-0.04450887441635132,
0.14429576694965363,
-0.5157769918441772,
-0.19745443761348724,
0.33772191405296326,
-0.28167012333869934,
0.9171625375747681,
0.5624778866767883,
-0.6651838421821594,
-0.6368309259414673,
-0.574905276298523,
-0.0588567405939... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
piotr-rybak/allegro-faq | piotr-rybak | 2023-08-19T06:47:50Z | 49 | 0 | null | [
"region:us"
] | 2023-08-19T06:47:50Z | 2023-08-19T06:22:45.000Z | 2023-08-19T06:22:45 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
IgorVolochay/russian_jokes | IgorVolochay | 2023-08-21T09:59:58Z | 49 | 1 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ru",
"license:mit",
"region:us"
] | 2023-08-21T09:59:58Z | 2023-08-21T09:38:50.000Z | 2023-08-21T09:38:50 | ---
license: mit
task_categories:
- text-generation
language:
- ru
size_categories:
- 100K<n<1M
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
InstaDeepAI/instanovo_ninespecies_exclude_yeast | InstaDeepAI | 2023-09-15T13:16:02Z | 49 | 1 | null | [
"license:cc0-1.0",
"region:us"
] | 2023-09-15T13:16:02Z | 2023-09-15T09:29:15.000Z | 2023-09-15T09:29:15 | ---
license: cc0-1.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: sequence
dtype: string
- name: modified_sequence
dtype: string
- name: precursor_mz
dtype: float64
- name: precursor_charge
dtype: int64
- name: mz_array
sequence: float64
- name: intensity_array
sequence: float32
splits:
- name: train
num_bytes: 839098224
num_examples: 499402
- name: validation
num_bytes: 49792990
num_examples: 28572
- name: test
num_bytes: 45505134
num_examples: 27142
download_size: 1119691599
dataset_size: 934396348
---
# Dataset Card for Nine-Species excluding Yeast
Dataset used for the baseline comparison of InstaNovo to other models.
## Dataset Description
- **Repository:** [InstaNovo](https://github.com/instadeepai/InstaNovo)
- **Paper:** [De novo peptide sequencing with InstaNovo: Accurate, database-free peptide identification for large scale proteomics experiments](https://www.biorxiv.org/content/10.1101/2023.08.30.555055v1)
### Dataset Summary
Dataset used in the original [DeepNovo](https://www.pnas.org/doi/full/10.1073/pnas.1705691114) paper.
- The training set contains 8 species excluding yeast
- The validation/test set contains the yeast species
## Dataset Structure
The dataset is tabular, where each row corresponds to a labelled MS2 spectrum (see the loading sketch after the field list).
- `sequence (string)` \
The target peptide sequence excluding post-translational modifications
- `modified_sequence (string)` \
The target peptide sequence including post-translational modifications
- `precursor_mz (float64)` \
The mass-to-charge of the precursor (from MS1)
- `precursor_charge (int64)` \
The charge of the precursor (from MS1)
- `mz_array (list[float64])` \
The mass-to-charge values of the MS2 spectrum
- `intensity_array (list[float32])` \
The intensity values of the MS2 spectrum
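A minimal loading sketch using the fields above:
```python
from datasets import load_dataset

ds = load_dataset("InstaDeepAI/instanovo_ninespecies_exclude_yeast")

spectrum = ds["test"][0]  # the test split holds the held-out yeast spectra
print(spectrum["modified_sequence"], spectrum["precursor_mz"], spectrum["precursor_charge"])
print(len(spectrum["mz_array"]), "peaks")
```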
## Citation Information
If you use this dataset, please cite the original authors.
The original data is available on [MASSIVE](https://massive.ucsd.edu/ProteoSAFe/static/massive.jsp) with the identifier `MSV000081382`.
Please also cite InstaNovo:
```bibtex
@article{eloff_kalogeropoulos_2023_instanovo,
title = {De novo peptide sequencing with InstaNovo: Accurate, database-free peptide identification for large scale proteomics experiments},
author = {Kevin Eloff and Konstantinos Kalogeropoulos and Oliver Morell and Amandla Mabona and Jakob Berg Jespersen and Wesley Williams and Sam van Beljouw and Marcin Skwark and Andreas Hougaard Laustsen and Stan J. J. Brouns and Anne Ljungars and Erwin Marten Schoof and Jeroen Van Goey and Ulrich auf dem Keller and Karim Beguir and Nicolas Lopez Carranza and Timothy Patrick Jenkins},
year = {2023},
doi = {10.1101/2023.08.30.555055},
publisher = {Cold Spring Harbor Laboratory},
URL = {https://www.biorxiv.org/content/10.1101/2023.08.30.555055v1},
journal = {bioRxiv}
}
``` | [
-0.1475246548652649,
-0.04415706917643547,
0.257612019777298,
0.006296830251812935,
-0.39500555396080017,
0.24023331701755524,
-0.05112246051430702,
-0.1355721652507782,
0.1344374418258667,
0.07639893889427185,
-0.30310943722724915,
-0.6395232081413269,
-0.3771638572216034,
0.4629525840282... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Vision-Flan/vision-flan_191-task_1k | Vision-Flan | 2023-09-21T18:11:37Z | 49 | 8 | null | [
"task_categories:visual-question-answering",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] | 2023-09-21T18:11:37Z | 2023-09-20T16:54:20.000Z | 2023-09-20T16:54:20 | ---
task_categories:
- visual-question-answering
language:
- en
pretty_name: Vision-Flan
size_categories:
- 100K<n<1M
---
# 🚀 Vision-Flan Dataset
vision-flan_191-task_1k is a human-labeled visual instruction tuning dataset consisting of 191 diverse tasks, with 1,000 examples for each task.
It is constructed for visual instruction tuning and for building large-scale vision-language models.
## Paper or blog for more information:
https://github.com/VT-NLP/MultiInstruct/
https://vision-flan.github.io/
*Paper coming soon* 😊
## Citation
*Paper coming soon* 😊. If you use Vision-Flan, please use the following citations:
```
@misc{visionFlan2023,
  title = {Vision-Flan: Scaling Visual Instruction Tuning},
url = {https://vision-flan.github.io/},
author = {Zhiyang Xu and Trevor Ashby and Chao Feng and Rulin Shao and Ying Shen and Di Jin and Qifan Wang and Lifu Huang},
month = {Sep},
year = {2023}
}
```
```
@inproceedings{DBLP:conf/acl/XuSH23,
author = {Zhiyang Xu and Ying Shen and Lifu Huang},
editor = {Anna Rogers and Jordan L. Boyd{-}Graber and Naoaki Okazaki},
title = {MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning},
booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), {ACL} 2023, Toronto, Canada, July 9-14, 2023},
pages = {11445--11465},
publisher = {Association for Computational Linguistics},
year = {2023},
url = {https://doi.org/10.18653/v1/2023.acl-long.641},
doi = {10.18653/v1/2023.acl-long.641},
timestamp = {Thu, 10 Aug 2023 12:35:59 +0200},
biburl = {https://dblp.org/rec/conf/acl/XuSH23.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## License:
Please carefully check the licenses for all the datasets on this [page](https://vision-flan.github.io/tasks.html) before use.
## Contact:
If you have any questions or concerns, please contact us at zhiyangx@vt.edu. | [
-0.370915949344635,
-0.6877295970916748,
0.12490706890821457,
0.08845049142837524,
-0.10970267653465271,
-0.28003430366516113,
-0.3779292702674866,
-0.43990567326545715,
0.10178858041763306,
0.4801107943058014,
-0.7139278054237366,
-0.6633367538452148,
-0.42115890979766846,
-0.055342845618... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/ted_en_id | SEACrowd | 2023-09-26T12:30:00Z | 49 | 0 | null | [
"language:ind",
"language:eng",
"machine-translation",
"region:us"
] | 2023-09-26T12:30:00Z | 2023-09-26T11:14:05.000Z | 2023-09-26T11:14:05 | ---
tags:
- machine-translation
language:
- ind
- eng
---
# ted_en_id
TED En-Id is a machine translation dataset containing Indonesian-English parallel sentences collected from TED talk transcripts. We split the dataset and use 75% as the training set, 10% as the validation set, and 15% as the test set. The dataset is evaluated in both directions, i.e., English to Indonesian (En → Id) and Indonesian to English (Id → En) translation.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
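A minimal sketch (loading with the default config is an assumption; the loader may expose NusaCrowd-style config names instead):
```python
# pip install nusacrowd
from datasets import load_dataset

ted = load_dataset("SEACrowd/ted_en_id")
print(ted["train"][0])  # "train" split name assumed from the 75/10/15 split above
```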
## Citation
```
@inproceedings{qi2018and,
title={When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?},
author={Qi, Ye and Sachan, Devendra and Felix, Matthieu and Padmanabhan, Sarguna and Neubig, Graham},
booktitle={Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)},
pages={529--535},
year={2018}
}
@inproceedings{cahyawijaya-etal-2021-indonlg,
title = "{I}ndo{NLG}: Benchmark and Resources for Evaluating {I}ndonesian Natural Language Generation",
author = "Cahyawijaya, Samuel and
Winata, Genta Indra and
Wilie, Bryan and
Vincentio, Karissa and
Li, Xiaohong and
Kuncoro, Adhiguna and
Ruder, Sebastian and
Lim, Zhi Yuan and
Bahar, Syafri and
Khodra, Masayu and
Purwarianti, Ayu and
Fung, Pascale",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.699",
doi = "10.18653/v1/2021.emnlp-main.699",
pages = "8875--8898",
abstract = "Natural language generation (NLG) benchmarks provide an important avenue to measure progress and develop better NLG systems. Unfortunately, the lack of publicly available NLG benchmarks for low-resource languages poses a challenging barrier for building NLG systems that work well for languages with limited amounts of data. Here we introduce IndoNLG, the first benchmark to measure natural language generation (NLG) progress in three low-resource{---}yet widely spoken{---}languages of Indonesia: Indonesian, Javanese, and Sundanese. Altogether, these languages are spoken by more than 100 million native speakers, and hence constitute an important use case of NLG systems today. Concretely, IndoNLG covers six tasks: summarization, question answering, chit-chat, and three different pairs of machine translation (MT) tasks. We collate a clean pretraining corpus of Indonesian, Sundanese, and Javanese datasets, Indo4B-Plus, which is used to pretrain our models: IndoBART and IndoGPT. We show that IndoBART and IndoGPT achieve competitive performance on all tasks{---}despite using only one-fifth the parameters of a larger multilingual model, mBART-large (Liu et al., 2020). This finding emphasizes the importance of pretraining on closely related, localized languages to achieve more efficient learning and faster inference at very low-resource languages like Javanese and Sundanese.",
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlg](https://github.com/IndoNLP/indonlg)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.4617532193660736,
-0.7314817905426025,
0.09130492061376572,
0.4481833577156067,
-0.5876227021217346,
-0.14720271527767181,
-0.812358021736145,
-0.31761857867240906,
0.19198060035705566,
0.23898155987262726,
-0.43955472111701965,
-0.5695555806159973,
-0.5138802528381348,
0.62816607952117... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/news_en_id | SEACrowd | 2023-09-26T12:33:03Z | 49 | 0 | null | [
"language:ind",
"language:eng",
"machine-translation",
"region:us"
] | 2023-09-26T12:33:03Z | 2023-09-26T11:17:22.000Z | 2023-09-26T11:17:22 | ---
tags:
- machine-translation
language:
- ind
- eng
---
# news_en_id
News En-Id is a machine translation dataset containing Indonesian-English parallel sentences collected from the news. The news data is collected from multiple sources: Pan Asia Networking Localization (PANL), bilingual BBC news articles, Berita Jakarta, and GlobalVoices. We split the dataset and use 75% as the training set, 10% as the validation set, and 15% as the test set. The dataset is evaluated in both directions, i.e., English to Indonesian (En → Id) and Indonesian to English (Id → En) translation.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{guntara-etal-2020-benchmarking,
title = "Benchmarking Multidomain {E}nglish-{I}ndonesian Machine Translation",
author = "Guntara, Tri Wahyu and
Aji, Alham Fikri and
Prasojo, Radityo Eko",
booktitle = "Proceedings of the 13th Workshop on Building and Using Comparable Corpora",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.bucc-1.6",
pages = "35--43",
language = "English",
ISBN = "979-10-95546-42-9",
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/gunnxx/indonesian-mt-data](https://github.com/gunnxx/indonesian-mt-data)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.5667195916175842,
-0.5928900837898254,
0.04147212579846382,
0.5407836437225342,
-0.6217692494392395,
-0.023373188450932503,
-0.41061532497406006,
-0.22637175023555756,
0.308531254529953,
0.4528213441371918,
-0.45651358366012573,
-0.8856889009475708,
-0.6819232702255249,
0.65748262405395... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vishnupriyavr/spotify-million-song-dataset | vishnupriyavr | 2023-10-02T12:17:02Z | 49 | 0 | null | [
"license:cc0-1.0",
"region:us"
] | 2023-10-02T12:17:02Z | 2023-10-02T12:16:58.000Z | 2023-10-02T12:16:58 | ---
license:
- cc0-1.0
converted_from: kaggle
kaggle_id: notshrirang/spotify-million-song-dataset
---
# Dataset Card for Spotify Million Song Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/notshrirang/spotify-million-song-dataset
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is the Spotify Million Song Dataset. It contains song names, artist names, links to the songs, and lyrics. It can be used for recommending, classifying, or clustering songs.
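A minimal loading sketch; the column names are not documented on this card, so list them before building anything on top:
```python
from datasets import load_dataset

# "train" split name is an assumption.
ds = load_dataset("vishnupriyavr/spotify-million-song-dataset", split="train")
print(ds.column_names)
print(ds[0])
```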
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@notshrirang](https://kaggle.com/notshrirang)
### Licensing Information
The license for this dataset is cc0-1.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] | [
-0.5144586563110352,
-0.3612844944000244,
0.1790030300617218,
0.5753556489944458,
-0.2928958237171173,
0.10069043189287186,
-0.4236355721950531,
-0.23255911469459534,
0.6670805811882019,
0.8387544751167297,
-1.1257766485214233,
-1.1202152967453003,
-0.6643354296684265,
-0.01976522244513034... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/MedNLI_test | hippocrates | 2023-10-18T19:46:58Z | 49 | 0 | null | [
"region:us"
] | 2023-10-18T19:46:58Z | 2023-10-12T15:48:06.000Z | 2023-10-12T15:48:06 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 4828284
num_examples: 11232
- name: valid
num_bytes: 606323
num_examples: 1395
- name: test
num_bytes: 605516
num_examples: 1422
download_size: 0
dataset_size: 6040123
---
# Dataset Card for "MedNLI_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.534270703792572,
-0.2851463854312897,
0.061185576021671295,
0.03685024380683899,
-0.07368084788322449,
-0.1512785255908966,
0.24585440754890442,
-0.17750205099582672,
0.9151660799980164,
0.36646580696105957,
-0.905876874923706,
-0.654577910900116,
-0.30198994278907776,
-0.15192186832427... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MananSantoki/vadodara-jsonl-converted | MananSantoki | 2023-10-25T11:29:27Z | 49 | 0 | null | [
"region:us"
] | 2023-10-25T11:29:27Z | 2023-10-25T11:29:26.000Z | 2023-10-25T11:29:26 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 103428
num_examples: 410
download_size: 40823
dataset_size: 103428
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "vadodara-jsonl-converted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.440216988325119,
-0.37366822361946106,
0.3446640968322754,
0.14424371719360352,
-0.34755587577819824,
0.030672738328576088,
0.01130294892936945,
0.017752576619386673,
0.42681506276130676,
0.8891132473945618,
-0.7318016886711121,
-0.8825642466545105,
-0.5889288187026978,
-0.1329080909490... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kheopsai/codevivil_instruct | kheopsai | 2023-10-26T13:16:19Z | 49 | 0 | null | [
"region:us"
] | 2023-10-26T13:16:19Z | 2023-10-26T13:10:23.000Z | 2023-10-26T13:10:23 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lavita/medical-qa-datasets | lavita | 2023-11-17T20:49:51Z | 49 | 2 | null | [
"task_categories:question-answering",
"language:en",
"medical",
"healthcare",
"clinical",
"region:us"
] | 2023-11-17T20:49:51Z | 2023-11-02T03:06:29.000Z | 2023-11-02T03:06:29 | ---
language:
- en
task_categories:
- question-answering
tags:
- medical
- healthcare
- clinical
dataset_info:
- config_name: all-processed
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 269589377
num_examples: 239357
download_size: 155267884
dataset_size: 269589377
- config_name: chatdoctor-icliniq
features:
- name: input
dtype: string
- name: answer_icliniq
dtype: string
- name: answer_chatgpt
dtype: string
- name: answer_chatdoctor
dtype: string
splits:
- name: test
num_bytes: 16962106
num_examples: 7321
download_size: 9373079
dataset_size: 16962106
- config_name: chatdoctor_healthcaremagic
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 126454896
num_examples: 112165
download_size: 70518147
dataset_size: 126454896
- config_name: med-qa-en-4options-source
features:
- name: meta_info
dtype: string
- name: question
dtype: string
- name: answer_idx
dtype: string
- name: answer
dtype: string
- name: options
list:
- name: key
dtype: string
- name: value
dtype: string
- name: metamap_phrases
sequence: string
splits:
- name: train
num_bytes: 15420106
num_examples: 10178
- name: test
num_bytes: 1976582
num_examples: 1273
- name: validation
num_bytes: 1925861
num_examples: 1272
download_size: 9684872
dataset_size: 19322549
- config_name: med-qa-en-5options-source
features:
- name: meta_info
dtype: string
- name: question
dtype: string
- name: answer_idx
dtype: string
- name: answer
dtype: string
- name: options
list:
- name: key
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 9765366
num_examples: 10178
- name: test
num_bytes: 1248299
num_examples: 1273
- name: validation
num_bytes: 1220927
num_examples: 1272
download_size: 6704270
dataset_size: 12234592
- config_name: medical_meadow_cord19
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1336834621
num_examples: 821007
download_size: 752855706
dataset_size: 1336834621
- config_name: medical_meadow_health_advice
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 2196957
num_examples: 8676
download_size: 890725
dataset_size: 2196957
- config_name: medical_meadow_medical_flashcards
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 16453987
num_examples: 33955
download_size: 6999958
dataset_size: 16453987
- config_name: medical_meadow_mediqa
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 15690088
num_examples: 2208
download_size: 3719929
dataset_size: 15690088
- config_name: medical_meadow_medqa
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 10225018
num_examples: 10178
download_size: 5505473
dataset_size: 10225018
- config_name: medical_meadow_mmmlu
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1442124
num_examples: 3787
download_size: 685604
dataset_size: 1442124
- config_name: medical_meadow_pubmed_causal
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 846695
num_examples: 2446
download_size: 210947
dataset_size: 846695
- config_name: medical_meadow_wikidoc
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 10224074
num_examples: 10000
download_size: 5593178
dataset_size: 10224074
- config_name: medical_meadow_wikidoc_patient_information
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 3262558
num_examples: 5942
download_size: 1544286
dataset_size: 3262558
- config_name: medmcqa
features:
- name: id
dtype: string
- name: question
dtype: string
- name: opa
dtype: string
- name: opb
dtype: string
- name: opc
dtype: string
- name: opd
dtype: string
- name: cop
dtype:
class_label:
names:
'0': a
'1': b
'2': c
'3': d
- name: choice_type
dtype: string
- name: exp
dtype: string
- name: subject_name
dtype: string
- name: topic_name
dtype: string
splits:
- name: train
num_bytes: 131903297
num_examples: 182822
- name: test
num_bytes: 1399350
num_examples: 6150
- name: validation
num_bytes: 2221428
num_examples: 4183
download_size: 88311484
dataset_size: 135524075
- config_name: mmmlu-anatomy
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 31810
num_examples: 134
- name: validation
num_bytes: 2879
num_examples: 13
- name: train
num_bytes: 717
num_examples: 4
download_size: 35632
dataset_size: 35406
- config_name: mmmlu-clinical-knowledge
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 60710
num_examples: 264
- name: validation
num_bytes: 6231
num_examples: 28
- name: train
num_bytes: 1026
num_examples: 4
download_size: 60329
dataset_size: 67967
- config_name: mmmlu-college-biology
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 47319
num_examples: 143
- name: validation
num_bytes: 4462
num_examples: 15
- name: train
num_bytes: 1103
num_examples: 4
download_size: 49782
dataset_size: 52884
- config_name: mmmlu-college-medicine
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 80363
num_examples: 172
- name: validation
num_bytes: 7079
num_examples: 21
- name: train
num_bytes: 1434
num_examples: 4
download_size: 63671
dataset_size: 88876
- config_name: mmmlu-medical-genetics
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 20021
num_examples: 99
- name: validation
num_bytes: 2590
num_examples: 10
- name: train
num_bytes: 854
num_examples: 4
download_size: 29043
dataset_size: 23465
- config_name: mmmlu-professional-medicine
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 214495
num_examples: 271
- name: validation
num_bytes: 23003
num_examples: 30
- name: train
num_bytes: 2531
num_examples: 4
download_size: 157219
dataset_size: 240029
- config_name: pubmed-qa
features:
- name: QUESTION
dtype: string
- name: CONTEXTS
sequence: string
- name: LABELS
sequence: string
- name: MESHES
sequence: string
- name: YEAR
dtype: string
- name: reasoning_required_pred
dtype: string
- name: reasoning_free_pred
dtype: string
- name: final_decision
dtype: string
- name: LONG_ANSWER
dtype: string
splits:
- name: train
num_bytes: 421508218
num_examples: 200000
- name: validation
num_bytes: 23762218
num_examples: 11269
download_size: 233536544
dataset_size: 445270436
- config_name: truthful-qa-generation
features:
- name: type
dtype: string
- name: category
dtype: string
- name: question
dtype: string
- name: best_answer
dtype: string
- name: correct_answers
sequence: string
- name: incorrect_answers
sequence: string
- name: source
dtype: string
splits:
- name: validation
num_bytes: 473382
num_examples: 817
download_size: 222648
dataset_size: 473382
- config_name: truthful-qa-multiple-choice
features:
- name: question
dtype: string
- name: mc1_targets
struct:
- name: choices
sequence: string
- name: labels
sequence: int32
- name: mc2_targets
struct:
- name: choices
sequence: string
- name: labels
sequence: int32
splits:
- name: validation
num_bytes: 609082
num_examples: 817
download_size: 271032
dataset_size: 609082
- config_name: usmle-self-assessment-step1
features:
- name: question
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: E
dtype: string
- name: F
dtype: string
- name: G
dtype: string
- name: H
dtype: string
- name: I
dtype: string
- name: answer
dtype: string
- name: answer_idx
dtype: string
splits:
- name: test
num_bytes: 80576
num_examples: 94
download_size: 60550
dataset_size: 80576
- config_name: usmle-self-assessment-step2
features:
- name: question
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: E
dtype: string
- name: F
dtype: string
- name: G
dtype: string
- name: answer
dtype: string
- name: answer_idx
dtype: string
splits:
- name: test
num_bytes: 133267
num_examples: 109
download_size: 80678
dataset_size: 133267
- config_name: usmle-self-assessment-step3
features:
- name: question
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: E
dtype: string
- name: F
dtype: string
- name: G
dtype: string
- name: answer
dtype: string
- name: answer_idx
dtype: string
splits:
- name: test
num_bytes: 156286
num_examples: 122
download_size: 98163
dataset_size: 156286
configs:
- config_name: all-processed
data_files:
- split: train
path: all-processed/train-*
- config_name: chatdoctor-icliniq
data_files:
- split: test
path: chatdoctor-icliniq/test-*
- config_name: chatdoctor_healthcaremagic
data_files:
- split: train
path: chatdoctor_healthcaremagic/train-*
- config_name: med-qa-en-4options-source
data_files:
- split: train
path: med-qa-en-4options-source/train-*
- split: test
path: med-qa-en-4options-source/test-*
- split: validation
path: med-qa-en-4options-source/validation-*
- config_name: med-qa-en-5options-source
data_files:
- split: train
path: med-qa-en-5options-source/train-*
- split: test
path: med-qa-en-5options-source/test-*
- split: validation
path: med-qa-en-5options-source/validation-*
- config_name: medical_meadow_cord19
data_files:
- split: train
path: medical_meadow_cord19/train-*
- config_name: medical_meadow_health_advice
data_files:
- split: train
path: medical_meadow_health_advice/train-*
- config_name: medical_meadow_medical_flashcards
data_files:
- split: train
path: medical_meadow_medical_flashcards/train-*
- config_name: medical_meadow_mediqa
data_files:
- split: train
path: medical_meadow_mediqa/train-*
- config_name: medical_meadow_medqa
data_files:
- split: train
path: medical_meadow_medqa/train-*
- config_name: medical_meadow_mmmlu
data_files:
- split: train
path: medical_meadow_mmmlu/train-*
- config_name: medical_meadow_pubmed_causal
data_files:
- split: train
path: medical_meadow_pubmed_causal/train-*
- config_name: medical_meadow_wikidoc
data_files:
- split: train
path: medical_meadow_wikidoc/train-*
- config_name: medical_meadow_wikidoc_patient_information
data_files:
- split: train
path: medical_meadow_wikidoc_patient_information/train-*
- config_name: medmcqa
data_files:
- split: train
path: medmcqa/train-*
- split: test
path: medmcqa/test-*
- split: validation
path: medmcqa/validation-*
- config_name: mmmlu-anatomy
data_files:
- split: test
path: mmmlu-anatomy/test-*
- split: validation
path: mmmlu-anatomy/validation-*
- split: train
path: mmmlu-anatomy/train-*
- config_name: mmmlu-clinical-knowledge
data_files:
- split: test
path: mmmlu-clinical-knowledge/test-*
- split: validation
path: mmmlu-clinical-knowledge/validation-*
- split: train
path: mmmlu-clinical-knowledge/train-*
- config_name: mmmlu-college-biology
data_files:
- split: test
path: mmmlu-college-biology/test-*
- split: validation
path: mmmlu-college-biology/validation-*
- split: train
path: mmmlu-college-biology/train-*
- config_name: mmmlu-college-medicine
data_files:
- split: test
path: mmmlu-college-medicine/test-*
- split: validation
path: mmmlu-college-medicine/validation-*
- split: train
path: mmmlu-college-medicine/train-*
- config_name: mmmlu-medical-genetics
data_files:
- split: test
path: mmmlu-medical-genetics/test-*
- split: validation
path: mmmlu-medical-genetics/validation-*
- split: train
path: mmmlu-medical-genetics/train-*
- config_name: mmmlu-professional-medicine
data_files:
- split: test
path: mmmlu-professional-medicine/test-*
- split: validation
path: mmmlu-professional-medicine/validation-*
- split: train
path: mmmlu-professional-medicine/train-*
- config_name: pubmed-qa
data_files:
- split: train
path: pubmed-qa/train-*
- split: validation
path: pubmed-qa/validation-*
- config_name: truthful-qa-generation
data_files:
- split: validation
path: truthful-qa-generation/validation-*
- config_name: truthful-qa-multiple-choice
data_files:
- split: validation
path: truthful-qa-multiple-choice/validation-*
- config_name: usmle-self-assessment-step1
data_files:
- split: test
path: usmle-self-assessment-step1/test-*
- config_name: usmle-self-assessment-step2
data_files:
- split: test
path: usmle-self-assessment-step2/test-*
- config_name: usmle-self-assessment-step3
data_files:
- split: test
path: usmle-self-assessment-step3/test-*
---
* The `all-processed` dataset is a concatenation of the `medical-meadow-*` and `chatdoctor_healthcaremagic` datasets
* The term `Chat Doctor` is replaced by the term `chatbot` in the `chatdoctor_healthcaremagic` dataset
* In line with the literature, the `medical_meadow_cord19` dataset is subsampled to 50,000 samples
* `truthful-qa-*` is a benchmark dataset for evaluating the truthfulness of models in text generation, which is used in the Llama 2 paper. Within this dataset, there are 55 and 16 questions related to `Health` and `Nutrition`, respectively, making it a valuable resource for medical question-answering scenarios. | [
-0.015826068818569183,
-0.9763717651367188,
0.41002869606018066,
-0.14670616388320923,
-0.12678688764572144,
0.18483185768127441,
0.2478935867547989,
-0.5209649205207825,
0.4572297930717468,
0.8896099925041199,
-0.9548065662384033,
-0.5686417818069458,
-0.5145524144172668,
0.23772045969963... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
anhdungitvn/elementary_math_qa | anhdungitvn | 2023-11-06T12:22:42Z | 49 | 0 | null | [
"region:us"
] | 2023-11-06T12:22:42Z | 2023-11-06T12:22:38.000Z | 2023-11-06T12:22:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: explanation
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 303871
num_examples: 1200
- name: test
num_bytes: 31224
num_examples: 189
download_size: 172910
dataset_size: 335095
---
# Dataset Card for "elementary_math_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6553354263305664,
-0.4768851101398468,
0.29363110661506653,
0.1832527220249176,
-0.11913421005010605,
0.013927147723734379,
0.5112901926040649,
0.108037069439888,
0.7413151860237122,
0.27433377504348755,
-0.7287679314613342,
-0.6972149014472961,
-0.28820353746414185,
-0.3666057288646698... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GHOFRANEE/Alcora2 | GHOFRANEE | 2023-11-08T14:17:12Z | 49 | 0 | null | [
"region:us"
] | 2023-11-08T14:17:12Z | 2023-11-08T13:39:29.000Z | 2023-11-08T13:39:29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 260321
num_examples: 102
download_size: 20852
dataset_size: 260321
---
# Dataset Card for "Alcora2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5535094738006592,
-0.19388194382190704,
0.303329199552536,
0.3274921476840973,
-0.2487838715314865,
0.07915612310171127,
0.5194785594940186,
-0.3035435676574707,
0.9773650169372559,
0.38317155838012695,
-0.7537569999694824,
-0.861836850643158,
-0.5692111253738403,
-0.5728634595870972,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
berardi6/birex-13.0 | berardi6 | 2023-11-20T11:24:49Z | 49 | 0 | null | [
"region:us"
] | 2023-11-20T11:24:49Z | 2023-11-17T13:59:29.000Z | 2023-11-17T13:59:29 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 730218
num_examples: 1545
download_size: 346406
dataset_size: 730218
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "birex-13.0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6923342943191528,
-0.12117484211921692,
0.1896723508834839,
0.582942008972168,
-0.29947593808174133,
0.0391245111823082,
0.2949276566505432,
-0.32422885298728943,
0.634665310382843,
0.4964587986469269,
-0.8947261571884155,
-0.6544390320777893,
-0.6599549055099487,
-0.05427953600883484,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nimsi2613/MedQuad | Nimsi2613 | 2023-11-21T01:29:05Z | 49 | 0 | null | [
"region:us"
] | 2023-11-21T01:29:05Z | 2023-11-21T01:29:01.000Z | 2023-11-21T01:29:01 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 19604993.268292684
num_examples: 14800
- name: test
num_bytes: 794797.0243902439
num_examples: 600
- name: valid
num_bytes: 1324661.7073170731
num_examples: 1000
download_size: 10176212
dataset_size: 21724452.0
---
# Dataset Card for "MedQuad"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.606951117515564,
-0.2964751422405243,
0.2629885971546173,
0.057469841092824936,
-0.2635461986064911,
0.01856149174273014,
0.3498162031173706,
-0.10291232168674469,
0.8849426507949829,
0.5731392502784729,
-0.7761052250862122,
-0.8176345825195312,
-0.5164172649383545,
-0.3059447109699249,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yxchar/ag-tlm | yxchar | 2021-11-04T21:20:14Z | 48 | 0 | null | [
"region:us"
] | 2021-11-04T21:20:14Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
quickdraw | null | 2023-06-26T12:09:26Z | 48 | 8 | quick-draw-dataset | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:1704.... | 2023-06-26T12:09:26Z | 2022-06-09T09:56:43.000Z | 2022-06-09T09:56:43 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: quick-draw-dataset
pretty_name: Quick, Draw!
dataset_info:
- config_name: raw
features:
- name: key_id
dtype: string
- name: word
dtype:
class_label:
names:
'0': aircraft carrier
'1': airplane
'2': alarm clock
'3': ambulance
'4': angel
'5': animal migration
'6': ant
'7': anvil
'8': apple
'9': arm
'10': asparagus
'11': axe
'12': backpack
'13': banana
'14': bandage
'15': barn
'16': baseball bat
'17': baseball
'18': basket
'19': basketball
'20': bat
'21': bathtub
'22': beach
'23': bear
'24': beard
'25': bed
'26': bee
'27': belt
'28': bench
'29': bicycle
'30': binoculars
'31': bird
'32': birthday cake
'33': blackberry
'34': blueberry
'35': book
'36': boomerang
'37': bottlecap
'38': bowtie
'39': bracelet
'40': brain
'41': bread
'42': bridge
'43': broccoli
'44': broom
'45': bucket
'46': bulldozer
'47': bus
'48': bush
'49': butterfly
'50': cactus
'51': cake
'52': calculator
'53': calendar
'54': camel
'55': camera
'56': camouflage
'57': campfire
'58': candle
'59': cannon
'60': canoe
'61': car
'62': carrot
'63': castle
'64': cat
'65': ceiling fan
'66': cell phone
'67': cello
'68': chair
'69': chandelier
'70': church
'71': circle
'72': clarinet
'73': clock
'74': cloud
'75': coffee cup
'76': compass
'77': computer
'78': cookie
'79': cooler
'80': couch
'81': cow
'82': crab
'83': crayon
'84': crocodile
'85': crown
'86': cruise ship
'87': cup
'88': diamond
'89': dishwasher
'90': diving board
'91': dog
'92': dolphin
'93': donut
'94': door
'95': dragon
'96': dresser
'97': drill
'98': drums
'99': duck
'100': dumbbell
'101': ear
'102': elbow
'103': elephant
'104': envelope
'105': eraser
'106': eye
'107': eyeglasses
'108': face
'109': fan
'110': feather
'111': fence
'112': finger
'113': fire hydrant
'114': fireplace
'115': firetruck
'116': fish
'117': flamingo
'118': flashlight
'119': flip flops
'120': floor lamp
'121': flower
'122': flying saucer
'123': foot
'124': fork
'125': frog
'126': frying pan
'127': garden hose
'128': garden
'129': giraffe
'130': goatee
'131': golf club
'132': grapes
'133': grass
'134': guitar
'135': hamburger
'136': hammer
'137': hand
'138': harp
'139': hat
'140': headphones
'141': hedgehog
'142': helicopter
'143': helmet
'144': hexagon
'145': hockey puck
'146': hockey stick
'147': horse
'148': hospital
'149': hot air balloon
'150': hot dog
'151': hot tub
'152': hourglass
'153': house plant
'154': house
'155': hurricane
'156': ice cream
'157': jacket
'158': jail
'159': kangaroo
'160': key
'161': keyboard
'162': knee
'163': knife
'164': ladder
'165': lantern
'166': laptop
'167': leaf
'168': leg
'169': light bulb
'170': lighter
'171': lighthouse
'172': lightning
'173': line
'174': lion
'175': lipstick
'176': lobster
'177': lollipop
'178': mailbox
'179': map
'180': marker
'181': matches
'182': megaphone
'183': mermaid
'184': microphone
'185': microwave
'186': monkey
'187': moon
'188': mosquito
'189': motorbike
'190': mountain
'191': mouse
'192': moustache
'193': mouth
'194': mug
'195': mushroom
'196': nail
'197': necklace
'198': nose
'199': ocean
'200': octagon
'201': octopus
'202': onion
'203': oven
'204': owl
'205': paint can
'206': paintbrush
'207': palm tree
'208': panda
'209': pants
'210': paper clip
'211': parachute
'212': parrot
'213': passport
'214': peanut
'215': pear
'216': peas
'217': pencil
'218': penguin
'219': piano
'220': pickup truck
'221': picture frame
'222': pig
'223': pillow
'224': pineapple
'225': pizza
'226': pliers
'227': police car
'228': pond
'229': pool
'230': popsicle
'231': postcard
'232': potato
'233': power outlet
'234': purse
'235': rabbit
'236': raccoon
'237': radio
'238': rain
'239': rainbow
'240': rake
'241': remote control
'242': rhinoceros
'243': rifle
'244': river
'245': roller coaster
'246': rollerskates
'247': sailboat
'248': sandwich
'249': saw
'250': saxophone
'251': school bus
'252': scissors
'253': scorpion
'254': screwdriver
'255': sea turtle
'256': see saw
'257': shark
'258': sheep
'259': shoe
'260': shorts
'261': shovel
'262': sink
'263': skateboard
'264': skull
'265': skyscraper
'266': sleeping bag
'267': smiley face
'268': snail
'269': snake
'270': snorkel
'271': snowflake
'272': snowman
'273': soccer ball
'274': sock
'275': speedboat
'276': spider
'277': spoon
'278': spreadsheet
'279': square
'280': squiggle
'281': squirrel
'282': stairs
'283': star
'284': steak
'285': stereo
'286': stethoscope
'287': stitches
'288': stop sign
'289': stove
'290': strawberry
'291': streetlight
'292': string bean
'293': submarine
'294': suitcase
'295': sun
'296': swan
'297': sweater
'298': swing set
'299': sword
'300': syringe
'301': t-shirt
'302': table
'303': teapot
'304': teddy-bear
'305': telephone
'306': television
'307': tennis racquet
'308': tent
'309': The Eiffel Tower
'310': The Great Wall of China
'311': The Mona Lisa
'312': tiger
'313': toaster
'314': toe
'315': toilet
'316': tooth
'317': toothbrush
'318': toothpaste
'319': tornado
'320': tractor
'321': traffic light
'322': train
'323': tree
'324': triangle
'325': trombone
'326': truck
'327': trumpet
'328': umbrella
'329': underwear
'330': van
'331': vase
'332': violin
'333': washing machine
'334': watermelon
'335': waterslide
'336': whale
'337': wheel
'338': windmill
'339': wine bottle
'340': wine glass
'341': wristwatch
'342': yoga
'343': zebra
'344': zigzag
- name: recognized
dtype: bool
- name: timestamp
dtype: timestamp[us, tz=UTC]
- name: countrycode
dtype: string
- name: drawing
sequence:
- name: x
sequence: float32
- name: y
sequence: float32
- name: t
sequence: int32
splits:
- name: train
num_bytes: 134763164880
num_examples: 50426266
download_size: 194810597157
dataset_size: 134763164880
- config_name: preprocessed_simplified_drawings
features:
- name: key_id
dtype: string
- name: word
dtype:
class_label:
names:
'0': aircraft carrier
'1': airplane
'2': alarm clock
'3': ambulance
'4': angel
'5': animal migration
'6': ant
'7': anvil
'8': apple
'9': arm
'10': asparagus
'11': axe
'12': backpack
'13': banana
'14': bandage
'15': barn
'16': baseball bat
'17': baseball
'18': basket
'19': basketball
'20': bat
'21': bathtub
'22': beach
'23': bear
'24': beard
'25': bed
'26': bee
'27': belt
'28': bench
'29': bicycle
'30': binoculars
'31': bird
'32': birthday cake
'33': blackberry
'34': blueberry
'35': book
'36': boomerang
'37': bottlecap
'38': bowtie
'39': bracelet
'40': brain
'41': bread
'42': bridge
'43': broccoli
'44': broom
'45': bucket
'46': bulldozer
'47': bus
'48': bush
'49': butterfly
'50': cactus
'51': cake
'52': calculator
'53': calendar
'54': camel
'55': camera
'56': camouflage
'57': campfire
'58': candle
'59': cannon
'60': canoe
'61': car
'62': carrot
'63': castle
'64': cat
'65': ceiling fan
'66': cell phone
'67': cello
'68': chair
'69': chandelier
'70': church
'71': circle
'72': clarinet
'73': clock
'74': cloud
'75': coffee cup
'76': compass
'77': computer
'78': cookie
'79': cooler
'80': couch
'81': cow
'82': crab
'83': crayon
'84': crocodile
'85': crown
'86': cruise ship
'87': cup
'88': diamond
'89': dishwasher
'90': diving board
'91': dog
'92': dolphin
'93': donut
'94': door
'95': dragon
'96': dresser
'97': drill
'98': drums
'99': duck
'100': dumbbell
'101': ear
'102': elbow
'103': elephant
'104': envelope
'105': eraser
'106': eye
'107': eyeglasses
'108': face
'109': fan
'110': feather
'111': fence
'112': finger
'113': fire hydrant
'114': fireplace
'115': firetruck
'116': fish
'117': flamingo
'118': flashlight
'119': flip flops
'120': floor lamp
'121': flower
'122': flying saucer
'123': foot
'124': fork
'125': frog
'126': frying pan
'127': garden hose
'128': garden
'129': giraffe
'130': goatee
'131': golf club
'132': grapes
'133': grass
'134': guitar
'135': hamburger
'136': hammer
'137': hand
'138': harp
'139': hat
'140': headphones
'141': hedgehog
'142': helicopter
'143': helmet
'144': hexagon
'145': hockey puck
'146': hockey stick
'147': horse
'148': hospital
'149': hot air balloon
'150': hot dog
'151': hot tub
'152': hourglass
'153': house plant
'154': house
'155': hurricane
'156': ice cream
'157': jacket
'158': jail
'159': kangaroo
'160': key
'161': keyboard
'162': knee
'163': knife
'164': ladder
'165': lantern
'166': laptop
'167': leaf
'168': leg
'169': light bulb
'170': lighter
'171': lighthouse
'172': lightning
'173': line
'174': lion
'175': lipstick
'176': lobster
'177': lollipop
'178': mailbox
'179': map
'180': marker
'181': matches
'182': megaphone
'183': mermaid
'184': microphone
'185': microwave
'186': monkey
'187': moon
'188': mosquito
'189': motorbike
'190': mountain
'191': mouse
'192': moustache
'193': mouth
'194': mug
'195': mushroom
'196': nail
'197': necklace
'198': nose
'199': ocean
'200': octagon
'201': octopus
'202': onion
'203': oven
'204': owl
'205': paint can
'206': paintbrush
'207': palm tree
'208': panda
'209': pants
'210': paper clip
'211': parachute
'212': parrot
'213': passport
'214': peanut
'215': pear
'216': peas
'217': pencil
'218': penguin
'219': piano
'220': pickup truck
'221': picture frame
'222': pig
'223': pillow
'224': pineapple
'225': pizza
'226': pliers
'227': police car
'228': pond
'229': pool
'230': popsicle
'231': postcard
'232': potato
'233': power outlet
'234': purse
'235': rabbit
'236': raccoon
'237': radio
'238': rain
'239': rainbow
'240': rake
'241': remote control
'242': rhinoceros
'243': rifle
'244': river
'245': roller coaster
'246': rollerskates
'247': sailboat
'248': sandwich
'249': saw
'250': saxophone
'251': school bus
'252': scissors
'253': scorpion
'254': screwdriver
'255': sea turtle
'256': see saw
'257': shark
'258': sheep
'259': shoe
'260': shorts
'261': shovel
'262': sink
'263': skateboard
'264': skull
'265': skyscraper
'266': sleeping bag
'267': smiley face
'268': snail
'269': snake
'270': snorkel
'271': snowflake
'272': snowman
'273': soccer ball
'274': sock
'275': speedboat
'276': spider
'277': spoon
'278': spreadsheet
'279': square
'280': squiggle
'281': squirrel
'282': stairs
'283': star
'284': steak
'285': stereo
'286': stethoscope
'287': stitches
'288': stop sign
'289': stove
'290': strawberry
'291': streetlight
'292': string bean
'293': submarine
'294': suitcase
'295': sun
'296': swan
'297': sweater
'298': swing set
'299': sword
'300': syringe
'301': t-shirt
'302': table
'303': teapot
'304': teddy-bear
'305': telephone
'306': television
'307': tennis racquet
'308': tent
'309': The Eiffel Tower
'310': The Great Wall of China
'311': The Mona Lisa
'312': tiger
'313': toaster
'314': toe
'315': toilet
'316': tooth
'317': toothbrush
'318': toothpaste
'319': tornado
'320': tractor
'321': traffic light
'322': train
'323': tree
'324': triangle
'325': trombone
'326': truck
'327': trumpet
'328': umbrella
'329': underwear
'330': van
'331': vase
'332': violin
'333': washing machine
'334': watermelon
'335': waterslide
'336': whale
'337': wheel
'338': windmill
'339': wine bottle
'340': wine glass
'341': wristwatch
'342': yoga
'343': zebra
'344': zigzag
- name: recognized
dtype: bool
- name: timestamp
dtype: timestamp[us, tz=UTC]
- name: countrycode
dtype: string
- name: drawing
sequence:
- name: x
sequence: uint8
- name: y
sequence: uint8
splits:
- name: train
num_bytes: 9741454188
num_examples: 50426266
download_size: 5889968422
dataset_size: 9741454188
- config_name: preprocessed_bitmaps
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': aircraft carrier
'1': airplane
'2': alarm clock
'3': ambulance
'4': angel
'5': animal migration
'6': ant
'7': anvil
'8': apple
'9': arm
'10': asparagus
'11': axe
'12': backpack
'13': banana
'14': bandage
'15': barn
'16': baseball bat
'17': baseball
'18': basket
'19': basketball
'20': bat
'21': bathtub
'22': beach
'23': bear
'24': beard
'25': bed
'26': bee
'27': belt
'28': bench
'29': bicycle
'30': binoculars
'31': bird
'32': birthday cake
'33': blackberry
'34': blueberry
'35': book
'36': boomerang
'37': bottlecap
'38': bowtie
'39': bracelet
'40': brain
'41': bread
'42': bridge
'43': broccoli
'44': broom
'45': bucket
'46': bulldozer
'47': bus
'48': bush
'49': butterfly
'50': cactus
'51': cake
'52': calculator
'53': calendar
'54': camel
'55': camera
'56': camouflage
'57': campfire
'58': candle
'59': cannon
'60': canoe
'61': car
'62': carrot
'63': castle
'64': cat
'65': ceiling fan
'66': cell phone
'67': cello
'68': chair
'69': chandelier
'70': church
'71': circle
'72': clarinet
'73': clock
'74': cloud
'75': coffee cup
'76': compass
'77': computer
'78': cookie
'79': cooler
'80': couch
'81': cow
'82': crab
'83': crayon
'84': crocodile
'85': crown
'86': cruise ship
'87': cup
'88': diamond
'89': dishwasher
'90': diving board
'91': dog
'92': dolphin
'93': donut
'94': door
'95': dragon
'96': dresser
'97': drill
'98': drums
'99': duck
'100': dumbbell
'101': ear
'102': elbow
'103': elephant
'104': envelope
'105': eraser
'106': eye
'107': eyeglasses
'108': face
'109': fan
'110': feather
'111': fence
'112': finger
'113': fire hydrant
'114': fireplace
'115': firetruck
'116': fish
'117': flamingo
'118': flashlight
'119': flip flops
'120': floor lamp
'121': flower
'122': flying saucer
'123': foot
'124': fork
'125': frog
'126': frying pan
'127': garden hose
'128': garden
'129': giraffe
'130': goatee
'131': golf club
'132': grapes
'133': grass
'134': guitar
'135': hamburger
'136': hammer
'137': hand
'138': harp
'139': hat
'140': headphones
'141': hedgehog
'142': helicopter
'143': helmet
'144': hexagon
'145': hockey puck
'146': hockey stick
'147': horse
'148': hospital
'149': hot air balloon
'150': hot dog
'151': hot tub
'152': hourglass
'153': house plant
'154': house
'155': hurricane
'156': ice cream
'157': jacket
'158': jail
'159': kangaroo
'160': key
'161': keyboard
'162': knee
'163': knife
'164': ladder
'165': lantern
'166': laptop
'167': leaf
'168': leg
'169': light bulb
'170': lighter
'171': lighthouse
'172': lightning
'173': line
'174': lion
'175': lipstick
'176': lobster
'177': lollipop
'178': mailbox
'179': map
'180': marker
'181': matches
'182': megaphone
'183': mermaid
'184': microphone
'185': microwave
'186': monkey
'187': moon
'188': mosquito
'189': motorbike
'190': mountain
'191': mouse
'192': moustache
'193': mouth
'194': mug
'195': mushroom
'196': nail
'197': necklace
'198': nose
'199': ocean
'200': octagon
'201': octopus
'202': onion
'203': oven
'204': owl
'205': paint can
'206': paintbrush
'207': palm tree
'208': panda
'209': pants
'210': paper clip
'211': parachute
'212': parrot
'213': passport
'214': peanut
'215': pear
'216': peas
'217': pencil
'218': penguin
'219': piano
'220': pickup truck
'221': picture frame
'222': pig
'223': pillow
'224': pineapple
'225': pizza
'226': pliers
'227': police car
'228': pond
'229': pool
'230': popsicle
'231': postcard
'232': potato
'233': power outlet
'234': purse
'235': rabbit
'236': raccoon
'237': radio
'238': rain
'239': rainbow
'240': rake
'241': remote control
'242': rhinoceros
'243': rifle
'244': river
'245': roller coaster
'246': rollerskates
'247': sailboat
'248': sandwich
'249': saw
'250': saxophone
'251': school bus
'252': scissors
'253': scorpion
'254': screwdriver
'255': sea turtle
'256': see saw
'257': shark
'258': sheep
'259': shoe
'260': shorts
'261': shovel
'262': sink
'263': skateboard
'264': skull
'265': skyscraper
'266': sleeping bag
'267': smiley face
'268': snail
'269': snake
'270': snorkel
'271': snowflake
'272': snowman
'273': soccer ball
'274': sock
'275': speedboat
'276': spider
'277': spoon
'278': spreadsheet
'279': square
'280': squiggle
'281': squirrel
'282': stairs
'283': star
'284': steak
'285': stereo
'286': stethoscope
'287': stitches
'288': stop sign
'289': stove
'290': strawberry
'291': streetlight
'292': string bean
'293': submarine
'294': suitcase
'295': sun
'296': swan
'297': sweater
'298': swing set
'299': sword
'300': syringe
'301': t-shirt
'302': table
'303': teapot
'304': teddy-bear
'305': telephone
'306': television
'307': tennis racquet
'308': tent
'309': The Eiffel Tower
'310': The Great Wall of China
'311': The Mona Lisa
'312': tiger
'313': toaster
'314': toe
'315': toilet
'316': tooth
'317': toothbrush
'318': toothpaste
'319': tornado
'320': tractor
'321': traffic light
'322': train
'323': tree
'324': triangle
'325': trombone
'326': truck
'327': trumpet
'328': umbrella
'329': underwear
'330': van
'331': vase
'332': violin
'333': washing machine
'334': watermelon
'335': waterslide
'336': whale
'337': wheel
'338': windmill
'339': wine bottle
'340': wine glass
'341': wristwatch
'342': yoga
'343': zebra
'344': zigzag
splits:
- name: train
num_bytes: 20372624628
num_examples: 50426266
download_size: 39534220144
dataset_size: 20372624628
- config_name: sketch_rnn
features:
- name: word
dtype:
class_label:
names:
'0': aircraft carrier
'1': airplane
'2': alarm clock
'3': ambulance
'4': angel
'5': animal migration
'6': ant
'7': anvil
'8': apple
'9': arm
'10': asparagus
'11': axe
'12': backpack
'13': banana
'14': bandage
'15': barn
'16': baseball bat
'17': baseball
'18': basket
'19': basketball
'20': bat
'21': bathtub
'22': beach
'23': bear
'24': beard
'25': bed
'26': bee
'27': belt
'28': bench
'29': bicycle
'30': binoculars
'31': bird
'32': birthday cake
'33': blackberry
'34': blueberry
'35': book
'36': boomerang
'37': bottlecap
'38': bowtie
'39': bracelet
'40': brain
'41': bread
'42': bridge
'43': broccoli
'44': broom
'45': bucket
'46': bulldozer
'47': bus
'48': bush
'49': butterfly
'50': cactus
'51': cake
'52': calculator
'53': calendar
'54': camel
'55': camera
'56': camouflage
'57': campfire
'58': candle
'59': cannon
'60': canoe
'61': car
'62': carrot
'63': castle
'64': cat
'65': ceiling fan
'66': cell phone
'67': cello
'68': chair
'69': chandelier
'70': church
'71': circle
'72': clarinet
'73': clock
'74': cloud
'75': coffee cup
'76': compass
'77': computer
'78': cookie
'79': cooler
'80': couch
'81': cow
'82': crab
'83': crayon
'84': crocodile
'85': crown
'86': cruise ship
'87': cup
'88': diamond
'89': dishwasher
'90': diving board
'91': dog
'92': dolphin
'93': donut
'94': door
'95': dragon
'96': dresser
'97': drill
'98': drums
'99': duck
'100': dumbbell
'101': ear
'102': elbow
'103': elephant
'104': envelope
'105': eraser
'106': eye
'107': eyeglasses
'108': face
'109': fan
'110': feather
'111': fence
'112': finger
'113': fire hydrant
'114': fireplace
'115': firetruck
'116': fish
'117': flamingo
'118': flashlight
'119': flip flops
'120': floor lamp
'121': flower
'122': flying saucer
'123': foot
'124': fork
'125': frog
'126': frying pan
'127': garden hose
'128': garden
'129': giraffe
'130': goatee
'131': golf club
'132': grapes
'133': grass
'134': guitar
'135': hamburger
'136': hammer
'137': hand
'138': harp
'139': hat
'140': headphones
'141': hedgehog
'142': helicopter
'143': helmet
'144': hexagon
'145': hockey puck
'146': hockey stick
'147': horse
'148': hospital
'149': hot air balloon
'150': hot dog
'151': hot tub
'152': hourglass
'153': house plant
'154': house
'155': hurricane
'156': ice cream
'157': jacket
'158': jail
'159': kangaroo
'160': key
'161': keyboard
'162': knee
'163': knife
'164': ladder
'165': lantern
'166': laptop
'167': leaf
'168': leg
'169': light bulb
'170': lighter
'171': lighthouse
'172': lightning
'173': line
'174': lion
'175': lipstick
'176': lobster
'177': lollipop
'178': mailbox
'179': map
'180': marker
'181': matches
'182': megaphone
'183': mermaid
'184': microphone
'185': microwave
'186': monkey
'187': moon
'188': mosquito
'189': motorbike
'190': mountain
'191': mouse
'192': moustache
'193': mouth
'194': mug
'195': mushroom
'196': nail
'197': necklace
'198': nose
'199': ocean
'200': octagon
'201': octopus
'202': onion
'203': oven
'204': owl
'205': paint can
'206': paintbrush
'207': palm tree
'208': panda
'209': pants
'210': paper clip
'211': parachute
'212': parrot
'213': passport
'214': peanut
'215': pear
'216': peas
'217': pencil
'218': penguin
'219': piano
'220': pickup truck
'221': picture frame
'222': pig
'223': pillow
'224': pineapple
'225': pizza
'226': pliers
'227': police car
'228': pond
'229': pool
'230': popsicle
'231': postcard
'232': potato
'233': power outlet
'234': purse
'235': rabbit
'236': raccoon
'237': radio
'238': rain
'239': rainbow
'240': rake
'241': remote control
'242': rhinoceros
'243': rifle
'244': river
'245': roller coaster
'246': rollerskates
'247': sailboat
'248': sandwich
'249': saw
'250': saxophone
'251': school bus
'252': scissors
'253': scorpion
'254': screwdriver
'255': sea turtle
'256': see saw
'257': shark
'258': sheep
'259': shoe
'260': shorts
'261': shovel
'262': sink
'263': skateboard
'264': skull
'265': skyscraper
'266': sleeping bag
'267': smiley face
'268': snail
'269': snake
'270': snorkel
'271': snowflake
'272': snowman
'273': soccer ball
'274': sock
'275': speedboat
'276': spider
'277': spoon
'278': spreadsheet
'279': square
'280': squiggle
'281': squirrel
'282': stairs
'283': star
'284': steak
'285': stereo
'286': stethoscope
'287': stitches
'288': stop sign
'289': stove
'290': strawberry
'291': streetlight
'292': string bean
'293': submarine
'294': suitcase
'295': sun
'296': swan
'297': sweater
'298': swing set
'299': sword
'300': syringe
'301': t-shirt
'302': table
'303': teapot
'304': teddy-bear
'305': telephone
'306': television
'307': tennis racquet
'308': tent
'309': The Eiffel Tower
'310': The Great Wall of China
'311': The Mona Lisa
'312': tiger
'313': toaster
'314': toe
'315': toilet
'316': tooth
'317': toothbrush
'318': toothpaste
'319': tornado
'320': tractor
'321': traffic light
'322': train
'323': tree
'324': triangle
'325': trombone
'326': truck
'327': trumpet
'328': umbrella
'329': underwear
'330': van
'331': vase
'332': violin
'333': washing machine
'334': watermelon
'335': waterslide
'336': whale
'337': wheel
'338': windmill
'339': wine bottle
'340': wine glass
'341': wristwatch
'342': yoga
'343': zebra
'344': zigzag
- name: drawing
dtype:
array2_d:
shape:
- 3
dtype: int16
splits:
- name: train
num_bytes: 13056229420
num_examples: 24150000
- name: validation
num_bytes: 466485546
num_examples: 862500
- name: test
num_bytes: 466191706
num_examples: 862500
download_size: 3928904911
dataset_size: 13988906672
- config_name: sketch_rnn_full
features:
- name: word
dtype:
class_label:
names:
'0': aircraft carrier
'1': airplane
'2': alarm clock
'3': ambulance
'4': angel
'5': animal migration
'6': ant
'7': anvil
'8': apple
'9': arm
'10': asparagus
'11': axe
'12': backpack
'13': banana
'14': bandage
'15': barn
'16': baseball bat
'17': baseball
'18': basket
'19': basketball
'20': bat
'21': bathtub
'22': beach
'23': bear
'24': beard
'25': bed
'26': bee
'27': belt
'28': bench
'29': bicycle
'30': binoculars
'31': bird
'32': birthday cake
'33': blackberry
'34': blueberry
'35': book
'36': boomerang
'37': bottlecap
'38': bowtie
'39': bracelet
'40': brain
'41': bread
'42': bridge
'43': broccoli
'44': broom
'45': bucket
'46': bulldozer
'47': bus
'48': bush
'49': butterfly
'50': cactus
'51': cake
'52': calculator
'53': calendar
'54': camel
'55': camera
'56': camouflage
'57': campfire
'58': candle
'59': cannon
'60': canoe
'61': car
'62': carrot
'63': castle
'64': cat
'65': ceiling fan
'66': cell phone
'67': cello
'68': chair
'69': chandelier
'70': church
'71': circle
'72': clarinet
'73': clock
'74': cloud
'75': coffee cup
'76': compass
'77': computer
'78': cookie
'79': cooler
'80': couch
'81': cow
'82': crab
'83': crayon
'84': crocodile
'85': crown
'86': cruise ship
'87': cup
'88': diamond
'89': dishwasher
'90': diving board
'91': dog
'92': dolphin
'93': donut
'94': door
'95': dragon
'96': dresser
'97': drill
'98': drums
'99': duck
'100': dumbbell
'101': ear
'102': elbow
'103': elephant
'104': envelope
'105': eraser
'106': eye
'107': eyeglasses
'108': face
'109': fan
'110': feather
'111': fence
'112': finger
'113': fire hydrant
'114': fireplace
'115': firetruck
'116': fish
'117': flamingo
'118': flashlight
'119': flip flops
'120': floor lamp
'121': flower
'122': flying saucer
'123': foot
'124': fork
'125': frog
'126': frying pan
'127': garden hose
'128': garden
'129': giraffe
'130': goatee
'131': golf club
'132': grapes
'133': grass
'134': guitar
'135': hamburger
'136': hammer
'137': hand
'138': harp
'139': hat
'140': headphones
'141': hedgehog
'142': helicopter
'143': helmet
'144': hexagon
'145': hockey puck
'146': hockey stick
'147': horse
'148': hospital
'149': hot air balloon
'150': hot dog
'151': hot tub
'152': hourglass
'153': house plant
'154': house
'155': hurricane
'156': ice cream
'157': jacket
'158': jail
'159': kangaroo
'160': key
'161': keyboard
'162': knee
'163': knife
'164': ladder
'165': lantern
'166': laptop
'167': leaf
'168': leg
'169': light bulb
'170': lighter
'171': lighthouse
'172': lightning
'173': line
'174': lion
'175': lipstick
'176': lobster
'177': lollipop
'178': mailbox
'179': map
'180': marker
'181': matches
'182': megaphone
'183': mermaid
'184': microphone
'185': microwave
'186': monkey
'187': moon
'188': mosquito
'189': motorbike
'190': mountain
'191': mouse
'192': moustache
'193': mouth
'194': mug
'195': mushroom
'196': nail
'197': necklace
'198': nose
'199': ocean
'200': octagon
'201': octopus
'202': onion
'203': oven
'204': owl
'205': paint can
'206': paintbrush
'207': palm tree
'208': panda
'209': pants
'210': paper clip
'211': parachute
'212': parrot
'213': passport
'214': peanut
'215': pear
'216': peas
'217': pencil
'218': penguin
'219': piano
'220': pickup truck
'221': picture frame
'222': pig
'223': pillow
'224': pineapple
'225': pizza
'226': pliers
'227': police car
'228': pond
'229': pool
'230': popsicle
'231': postcard
'232': potato
'233': power outlet
'234': purse
'235': rabbit
'236': raccoon
'237': radio
'238': rain
'239': rainbow
'240': rake
'241': remote control
'242': rhinoceros
'243': rifle
'244': river
'245': roller coaster
'246': rollerskates
'247': sailboat
'248': sandwich
'249': saw
'250': saxophone
'251': school bus
'252': scissors
'253': scorpion
'254': screwdriver
'255': sea turtle
'256': see saw
'257': shark
'258': sheep
'259': shoe
'260': shorts
'261': shovel
'262': sink
'263': skateboard
'264': skull
'265': skyscraper
'266': sleeping bag
'267': smiley face
'268': snail
'269': snake
'270': snorkel
'271': snowflake
'272': snowman
'273': soccer ball
'274': sock
'275': speedboat
'276': spider
'277': spoon
'278': spreadsheet
'279': square
'280': squiggle
'281': squirrel
'282': stairs
'283': star
'284': steak
'285': stereo
'286': stethoscope
'287': stitches
'288': stop sign
'289': stove
'290': strawberry
'291': streetlight
'292': string bean
'293': submarine
'294': suitcase
'295': sun
'296': swan
'297': sweater
'298': swing set
'299': sword
'300': syringe
'301': t-shirt
'302': table
'303': teapot
'304': teddy-bear
'305': telephone
'306': television
'307': tennis racquet
'308': tent
'309': The Eiffel Tower
'310': The Great Wall of China
'311': The Mona Lisa
'312': tiger
'313': toaster
'314': toe
'315': toilet
'316': tooth
'317': toothbrush
'318': toothpaste
'319': tornado
'320': tractor
'321': traffic light
'322': train
'323': tree
'324': triangle
'325': trombone
'326': truck
'327': trumpet
'328': umbrella
'329': underwear
'330': van
'331': vase
'332': violin
'333': washing machine
'334': watermelon
'335': waterslide
'336': whale
'337': wheel
'338': windmill
'339': wine bottle
'340': wine glass
'341': wristwatch
'342': yoga
'343': zebra
'344': zigzag
- name: drawing
dtype:
array2_d:
shape:
- 3
dtype: int16
splits:
- name: train
num_bytes: 23725242280
num_examples: 43988874
- name: validation
num_bytes: 466485546
num_examples: 862500
- name: test
num_bytes: 466191706
num_examples: 862500
download_size: 6928245966
dataset_size: 24657919532
---
# Dataset Card for Quick, Draw!
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Quick, Draw! homepage](https://quickdraw.withgoogle.com/data)
- **Repository:** [Quick, Draw! repository](https://github.com/googlecreativelab/quickdraw-dataset)
- **Paper:** [A Neural Representation of Sketch Drawings](https://arxiv.org/abs/1704.03477v4)
- **Leaderboard:** [Quick, Draw! Doodle Recognition Challenge](https://www.kaggle.com/competitions/quickdraw-doodle-recognition/leaderboard)
- **Point of Contact:** [Quick, Draw! support](mailto:quickdraw-support@google.com)
### Dataset Summary
The Quick Draw Dataset is a collection of 50 million drawings across 345 categories, contributed by players of the game Quick, Draw!. The drawings were captured as timestamped vectors, tagged with metadata including what the player was asked to draw and in which country the player was located.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given sketch into one of 345 classes.
The (closed) leaderboard for this task is available [here](https://www.kaggle.com/competitions/quickdraw-doodle-recognition/leaderboard).
### Languages
English.
## Dataset Structure
### Data Instances
#### `raw`
A data point comprises a drawing and its metadata.
```
{
'key_id': '5475678961008640',
'word': 0,
'recognized': True,
'timestamp': datetime.datetime(2017, 3, 28, 13, 28, 0, 851730),
'countrycode': 'MY',
'drawing': {
'x': [[379.0, 380.0, 381.0, 381.0, 381.0, 381.0, 382.0], [362.0, 368.0, 375.0, 380.0, 388.0, 393.0, 399.0, 404.0, 409.0, 410.0, 410.0, 405.0, 397.0, 392.0, 384.0, 377.0, 370.0, 363.0, 356.0, 348.0, 342.0, 336.0, 333.0], ..., [477.0, 473.0, 471.0, 469.0, 468.0, 466.0, 464.0, 462.0, 461.0, 469.0, 475.0, 483.0, 491.0, 499.0, 510.0, 521.0, 531.0, 540.0, 548.0, 558.0, 566.0, 576.0, 583.0, 590.0, 595.0, 598.0, 597.0, 596.0, 594.0, 592.0, 590.0, 589.0, 588.0, 586.0]],
'y': [[1.0, 7.0, 15.0, 21.0, 27.0, 32.0, 32.0], [17.0, 17.0, 17.0, 17.0, 16.0, 16.0, 16.0, 16.0, 18.0, 23.0, 29.0, 32.0, 32.0, 32.0, 29.0, 27.0, 25.0, 23.0, 21.0, 19.0, 17.0, 16.0, 14.0], ..., [151.0, 146.0, 139.0, 131.0, 125.0, 119.0, 113.0, 107.0, 102.0, 99.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 100.0, 102.0, 104.0, 105.0, 110.0, 115.0, 121.0, 126.0, 131.0, 137.0, 142.0, 148.0, 150.0]],
't': [[0, 84, 100, 116, 132, 148, 260], [573, 636, 652, 660, 676, 684, 701, 724, 796, 838, 860, 956, 973, 979, 989, 995, 1005, 1012, 1020, 1028, 1036, 1053, 1118], ..., [8349, 8446, 8468, 8484, 8500, 8516, 8541, 8557, 8573, 8685, 8693, 8702, 8710, 8718, 8724, 8732, 8741, 8748, 8757, 8764, 8773, 8780, 8788, 8797, 8804, 8965, 8996, 9029, 9045, 9061, 9076, 9092, 9109, 9167]]
}
}
```
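For reference, a minimal sketch of inspecting a `raw` sample with the `datasets` library; this assumes the dataset loads under the Hub id `quickdraw`, and uses streaming because the raw split is very large:
```python
from datasets import load_dataset

# Stream the `raw` configuration to avoid downloading the full split up front.
ds = load_dataset("quickdraw", "raw", split="train", streaming=True)

sample = next(iter(ds))
strokes = sample["drawing"]
# `x`, `y` and `t` are parallel lists of lists; each sublist is one stroke.
for xs, ys, ts in zip(strokes["x"], strokes["y"], strokes["t"]):
    print(f"stroke: {len(xs)} points, duration {ts[-1] - ts[0]} ms")
```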
#### `preprocessed_simplified_drawings`
The simplified version of the dataset, generated from the `raw` data with simplified vectors, timing information removed, and the drawings positioned and scaled into a 256x256 region.
The simplification process was (a sketch of the first two steps follows the example below):
1. Align the drawing to the top-left corner, to have minimum values of 0.
2. Uniformly scale the drawing, to have a maximum value of 255.
3. Resample all strokes with a 1 pixel spacing.
4. Simplify all strokes using the [Ramer-Douglas-Peucker algorithm](https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm) with an epsilon value of 2.0.
```
{
'key_id': '5475678961008640',
'word': 0,
'recognized': True,
'timestamp': datetime.datetime(2017, 3, 28, 15, 28),
'countrycode': 'MY',
'drawing': {
'x': [[31, 32], [27, 37, 38, 35, 21], [25, 28, 38, 39], [33, 34, 32], [5, 188, 254, 251, 241, 185, 45, 9, 0], [35, 35, 43, 125, 126], [35, 76, 80, 77], [53, 50, 54, 80, 78]],
'y': [[0, 7], [4, 4, 6, 7, 3], [5, 10, 10, 7], [4, 33, 44], [50, 50, 54, 83, 86, 90, 86, 77, 52], [85, 91, 92, 96, 90], [35, 37, 41, 47], [34, 23, 22, 23, 34]]
}
}
```
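The first two simplification steps can be illustrated directly in Python; a minimal sketch, assuming strokes are given as in the `raw` format above (the `align_and_scale` helper is hypothetical, and resampling plus Ramer-Douglas-Peucker simplification, steps 3-4, are omitted):
```python
# Hypothetical helper covering simplification steps 1-2 only.
def align_and_scale(strokes):
    """strokes: {'x': [[...], ...], 'y': [[...], ...]} with raw coordinates."""
    min_x = min(p for s in strokes["x"] for p in s)
    min_y = min(p for s in strokes["y"] for p in s)
    xs = [[p - min_x for p in s] for s in strokes["x"]]  # step 1: align to 0
    ys = [[p - min_y for p in s] for s in strokes["y"]]
    peak = max(max(p for s in xs for p in s), max(p for s in ys for p in s))
    scale = 255.0 / peak if peak > 0 else 1.0            # step 2: max value 255
    return {"x": [[round(p * scale) for p in s] for s in xs],
            "y": [[round(p * scale) for p in s] for s in ys]}
```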
#### `preprocessed_bitmaps` (default configuration)
This configuration contains the 28x28 grayscale bitmap images that were generated from the simplified data, but are aligned to the center of the drawing's bounding box rather than the top-left corner. The code that was used for generation is available [here](https://github.com/googlecreativelab/quickdraw-dataset/issues/19#issuecomment-402247262).
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x10B5B102828>,
'label': 0
}
```
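A bitmap sample decodes to a `PIL.Image.Image`, which converts directly to an array for model input; a minimal sketch, assuming `numpy` is available:
```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("quickdraw", "preprocessed_bitmaps", split="train", streaming=True)
sample = next(iter(ds))

# Normalize the 28x28 grayscale bitmap to [0, 1] floats.
pixels = np.asarray(sample["image"], dtype=np.float32) / 255.0
print(pixels.shape, sample["label"])  # (28, 28) and an integer class id
```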
#### `sketch_rnn` and `sketch_rnn_full`
The `sketch_rnn_full` configuration stores the data in a format suitable for inputs into a recurrent neural network and was used for training the [Sketch-RNN](https://arxiv.org/abs/1704.03477) model. Unlike `sketch_rnn`, where the samples have been randomly selected from each category, the `sketch_rnn_full` configuration contains the full data for each category.
```
{
'word': 0,
'drawing': [[132, 0, 0], [23, 4, 0], [61, 1, 0], [76, 0, 0], [22, -4, 0], [152, 0, 0], [50, -5, 0], [36, -10, 0], [8, 26, 0], [0, 69, 0], [-2, 11, 0], [-8, 10, 0], [-56, 24, 0], [-23, 14, 0], [-99, 40, 0], [-45, 6, 0], [-21, 6, 0], [-170, 2, 0], [-81, 0, 0], [-29, -9, 0], [-94, -19, 0], [-48, -24, 0], [-6, -16, 0], [2, -36, 0], [7, -29, 0], [23, -45, 0], [13, -6, 0], [41, -8, 0], [42, -2, 1], [392, 38, 0], [2, 19, 0], [11, 33, 0], [13, 0, 0], [24, -9, 0], [26, -27, 0], [0, -14, 0], [-8, -10, 0], [-18, -5, 0], [-14, 1, 0], [-23, 4, 0], [-21, 12, 1], [-152, 18, 0], [10, 46, 0], [26, 6, 0], [38, 0, 0], [31, -2, 0], [7, -2, 0], [4, -6, 0], [-10, -21, 0], [-2, -33, 0], [-6, -11, 0], [-46, 1, 0], [-39, 18, 0], [-19, 4, 1], [-122, 0, 0], [-2, 38, 0], [4, 16, 0], [6, 4, 0], [78, 0, 0], [4, -8, 0], [-8, -36, 0], [0, -22, 0], [-6, -2, 0], [-32, 14, 0], [-58, 13, 1], [-96, -12, 0], [-10, 27, 0], [2, 32, 0], [102, 0, 0], [1, -7, 0], [-27, -17, 0], [-4, -6, 0], [-1, -34, 0], [-64, 8, 1], [129, -138, 0], [-108, 0, 0], [-8, 12, 0], [-1, 15, 0], [12, 15, 0], [20, 5, 0], [61, -3, 0], [24, 6, 0], [19, 0, 0], [5, -4, 0], [2, 14, 1]]
}
```
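A minimal sketch of loading this configuration and checking one drawing's shape (the validation split is used here as the smallest one):
```python
from datasets import load_dataset

ds = load_dataset("quickdraw", "sketch_rnn", split="validation")
sample = ds[0]
# Each row of `drawing` is a (delta_x, delta_y, pen_lift) triple.
print(sample["word"], len(sample["drawing"]))
```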
### Data Fields
#### `raw`
- `key_id`: A unique identifier across all drawings.
- `word`: Category the player was prompted to draw.
- `recognized`: Whether the word was recognized by the game.
- `timestamp`: When the drawing was created.
- `countrycode`: A two letter country code ([ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2)) of where the player was located.
- `drawing`: A dictionary where `x` and `y` are the pixel coordinates, and `t` is the time in milliseconds since the first point. `x` and `y` are real-valued while `t` is an integer. `x`, `y` and `t` match in length and are represented as lists of lists where each sublist corresponds to a single stroke. The raw drawings can have vastly different bounding boxes and numbers of points due to the different devices used for display and input.
#### `preprocessed_simplified_drawings`
- `key_id`: A unique identifier across all drawings.
- `word`: Category the player was prompted to draw.
- `recognized`: Whether the word was recognized by the game.
- `timestamp`: When the drawing was created.
- `countrycode`: A two letter country code ([ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2)) of where the player was located.
- `drawing`: A simplified drawing represented as a dictionary where `x` and `y` are the pixel coordinates. The simplification process is described in the `Data Instances` section.
#### `preprocessed_bitmaps` (default configuration)
- `image`: A `PIL.Image.Image` object containing the 28x28 grayscale bitmap. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to index the sample before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. A short sketch of this access pattern follows the class-label table below.
- `label`: Category the player was prompted to draw.
<details>
<summary>
Click here to see the full class labels mapping:
</summary>
|id|class|
|---|---|
|0|aircraft carrier|
|1|airplane|
|2|alarm clock|
|3|ambulance|
|4|angel|
|5|animal migration|
|6|ant|
|7|anvil|
|8|apple|
|9|arm|
|10|asparagus|
|11|axe|
|12|backpack|
|13|banana|
|14|bandage|
|15|barn|
|16|baseball bat|
|17|baseball|
|18|basket|
|19|basketball|
|20|bat|
|21|bathtub|
|22|beach|
|23|bear|
|24|beard|
|25|bed|
|26|bee|
|27|belt|
|28|bench|
|29|bicycle|
|30|binoculars|
|31|bird|
|32|birthday cake|
|33|blackberry|
|34|blueberry|
|35|book|
|36|boomerang|
|37|bottlecap|
|38|bowtie|
|39|bracelet|
|40|brain|
|41|bread|
|42|bridge|
|43|broccoli|
|44|broom|
|45|bucket|
|46|bulldozer|
|47|bus|
|48|bush|
|49|butterfly|
|50|cactus|
|51|cake|
|52|calculator|
|53|calendar|
|54|camel|
|55|camera|
|56|camouflage|
|57|campfire|
|58|candle|
|59|cannon|
|60|canoe|
|61|car|
|62|carrot|
|63|castle|
|64|cat|
|65|ceiling fan|
|66|cell phone|
|67|cello|
|68|chair|
|69|chandelier|
|70|church|
|71|circle|
|72|clarinet|
|73|clock|
|74|cloud|
|75|coffee cup|
|76|compass|
|77|computer|
|78|cookie|
|79|cooler|
|80|couch|
|81|cow|
|82|crab|
|83|crayon|
|84|crocodile|
|85|crown|
|86|cruise ship|
|87|cup|
|88|diamond|
|89|dishwasher|
|90|diving board|
|91|dog|
|92|dolphin|
|93|donut|
|94|door|
|95|dragon|
|96|dresser|
|97|drill|
|98|drums|
|99|duck|
|100|dumbbell|
|101|ear|
|102|elbow|
|103|elephant|
|104|envelope|
|105|eraser|
|106|eye|
|107|eyeglasses|
|108|face|
|109|fan|
|110|feather|
|111|fence|
|112|finger|
|113|fire hydrant|
|114|fireplace|
|115|firetruck|
|116|fish|
|117|flamingo|
|118|flashlight|
|119|flip flops|
|120|floor lamp|
|121|flower|
|122|flying saucer|
|123|foot|
|124|fork|
|125|frog|
|126|frying pan|
|127|garden hose|
|128|garden|
|129|giraffe|
|130|goatee|
|131|golf club|
|132|grapes|
|133|grass|
|134|guitar|
|135|hamburger|
|136|hammer|
|137|hand|
|138|harp|
|139|hat|
|140|headphones|
|141|hedgehog|
|142|helicopter|
|143|helmet|
|144|hexagon|
|145|hockey puck|
|146|hockey stick|
|147|horse|
|148|hospital|
|149|hot air balloon|
|150|hot dog|
|151|hot tub|
|152|hourglass|
|153|house plant|
|154|house|
|155|hurricane|
|156|ice cream|
|157|jacket|
|158|jail|
|159|kangaroo|
|160|key|
|161|keyboard|
|162|knee|
|163|knife|
|164|ladder|
|165|lantern|
|166|laptop|
|167|leaf|
|168|leg|
|169|light bulb|
|170|lighter|
|171|lighthouse|
|172|lightning|
|173|line|
|174|lion|
|175|lipstick|
|176|lobster|
|177|lollipop|
|178|mailbox|
|179|map|
|180|marker|
|181|matches|
|182|megaphone|
|183|mermaid|
|184|microphone|
|185|microwave|
|186|monkey|
|187|moon|
|188|mosquito|
|189|motorbike|
|190|mountain|
|191|mouse|
|192|moustache|
|193|mouth|
|194|mug|
|195|mushroom|
|196|nail|
|197|necklace|
|198|nose|
|199|ocean|
|200|octagon|
|201|octopus|
|202|onion|
|203|oven|
|204|owl|
|205|paint can|
|206|paintbrush|
|207|palm tree|
|208|panda|
|209|pants|
|210|paper clip|
|211|parachute|
|212|parrot|
|213|passport|
|214|peanut|
|215|pear|
|216|peas|
|217|pencil|
|218|penguin|
|219|piano|
|220|pickup truck|
|221|picture frame|
|222|pig|
|223|pillow|
|224|pineapple|
|225|pizza|
|226|pliers|
|227|police car|
|228|pond|
|229|pool|
|230|popsicle|
|231|postcard|
|232|potato|
|233|power outlet|
|234|purse|
|235|rabbit|
|236|raccoon|
|237|radio|
|238|rain|
|239|rainbow|
|240|rake|
|241|remote control|
|242|rhinoceros|
|243|rifle|
|244|river|
|245|roller coaster|
|246|rollerskates|
|247|sailboat|
|248|sandwich|
|249|saw|
|250|saxophone|
|251|school bus|
|252|scissors|
|253|scorpion|
|254|screwdriver|
|255|sea turtle|
|256|see saw|
|257|shark|
|258|sheep|
|259|shoe|
|260|shorts|
|261|shovel|
|262|sink|
|263|skateboard|
|264|skull|
|265|skyscraper|
|266|sleeping bag|
|267|smiley face|
|268|snail|
|269|snake|
|270|snorkel|
|271|snowflake|
|272|snowman|
|273|soccer ball|
|274|sock|
|275|speedboat|
|276|spider|
|277|spoon|
|278|spreadsheet|
|279|square|
|280|squiggle|
|281|squirrel|
|282|stairs|
|283|star|
|284|steak|
|285|stereo|
|286|stethoscope|
|287|stitches|
|288|stop sign|
|289|stove|
|290|strawberry|
|291|streetlight|
|292|string bean|
|293|submarine|
|294|suitcase|
|295|sun|
|296|swan|
|297|sweater|
|298|swing set|
|299|sword|
|300|syringe|
|301|t-shirt|
|302|table|
|303|teapot|
|304|teddy-bear|
|305|telephone|
|306|television|
|307|tennis racquet|
|308|tent|
|309|The Eiffel Tower|
|310|The Great Wall of China|
|311|The Mona Lisa|
|312|tiger|
|313|toaster|
|314|toe|
|315|toilet|
|316|tooth|
|317|toothbrush|
|318|toothpaste|
|319|tornado|
|320|tractor|
|321|traffic light|
|322|train|
|323|tree|
|324|triangle|
|325|trombone|
|326|truck|
|327|trumpet|
|328|umbrella|
|329|underwear|
|330|van|
|331|vase|
|332|violin|
|333|washing machine|
|334|watermelon|
|335|waterslide|
|336|whale|
|337|wheel|
|338|windmill|
|339|wine bottle|
|340|wine glass|
|341|wristwatch|
|342|yoga|
|343|zebra|
|344|zigzag|
</details>
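Following the indexing advice above, a minimal loading sketch for the default configuration could look as follows (the Hub dataset id below is an assumption; substitute the actual one if it differs):
```python
from datasets import load_dataset

ds = load_dataset("quickdraw", split="train")  # assumed Hub id; default config: preprocessed_bitmaps

sample = ds[0]                 # index the sample first, then access the column
image = sample["image"]        # 28x28 grayscale PIL.Image.Image
label_name = ds.features["label"].int2str(sample["label"])
print(label_name, image.size)
```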
#### `sketch_rnn` and `sketch_rnn_full`
- `word`: Category the player was prompted to draw.
- `drawing`: An array of strokes. Strokes are represented as 3-tuples consisting of x-offset, y-offset, and a binary variable which is 1 if the pen is lifted between this position and the next, and 0 otherwise.
<details>
<summary>
Click here to see the code for visualizing drawings in Jupyter Notebook or Google Colab:
</summary>
```python
import numpy as np
import svgwrite # pip install svgwrite
from IPython.display import SVG, display
def draw_strokes(drawing, factor=0.045):
"""Displays vector drawing as SVG.
Args:
drawing: a list of strokes represented as 3-tuples
factor: scaling factor. The smaller the scaling factor, the bigger the SVG picture and vice versa.
"""
def get_bounds(data, factor):
"""Return bounds of data."""
min_x = 0
max_x = 0
min_y = 0
max_y = 0
abs_x = 0
abs_y = 0
for i in range(len(data)):
x = float(data[i, 0]) / factor
y = float(data[i, 1]) / factor
abs_x += x
abs_y += y
min_x = min(min_x, abs_x)
min_y = min(min_y, abs_y)
max_x = max(max_x, abs_x)
max_y = max(max_y, abs_y)
return (min_x, max_x, min_y, max_y)
data = np.array(drawing)
min_x, max_x, min_y, max_y = get_bounds(data, factor)
dims = (50 + max_x - min_x, 50 + max_y - min_y)
dwg = svgwrite.Drawing(size=dims)
dwg.add(dwg.rect(insert=(0, 0), size=dims,fill='white'))
lift_pen = 1
abs_x = 25 - min_x
abs_y = 25 - min_y
p = "M%s,%s " % (abs_x, abs_y)
command = "m"
for i in range(len(data)):
if (lift_pen == 1):
command = "m"
elif (command != "l"):
command = "l"
else:
command = ""
x = float(data[i,0])/factor
y = float(data[i,1])/factor
lift_pen = data[i, 2]
p += command+str(x)+","+str(y)+" "
the_color = "black"
stroke_width = 1
dwg.add(dwg.path(p).stroke(the_color,stroke_width).fill("none"))
display(SVG(dwg.tostring()))
```
</details>
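For example, `draw_strokes` can be applied to a sample from the `sketch_rnn` configuration (the Hub dataset id below is an assumption):
```python
from datasets import load_dataset

ds = load_dataset("quickdraw", "sketch_rnn", split="train")  # assumed Hub id
draw_strokes(ds[0]["drawing"])
```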
> **Note**: Sketch-RNN takes as input strokes represented as 5-tuples, with drawings padded to a common maximum length and prefixed by the special start token `[0, 0, 1, 0, 0]`. The 5-tuple representation consists of x-offset, y-offset, and p_1, p_2, p_3, a binary one-hot vector of 3 possible pen states: pen down, pen up, end of sketch. More precisely, the first two elements are the offset distance in the x and y directions of the pen from the previous point. The last 3 elements represent a binary one-hot vector of 3 possible states. The first pen state, p1, indicates that the pen is currently touching the paper, and that a line will be drawn connecting the next point with the current point. The second pen state, p2, indicates that the pen will be lifted from the paper after the current point, and that no line will be drawn next. The final pen state, p3, indicates that the drawing has ended, and subsequent points, including the current point, will not be rendered.
><details>
> <summary>
> Click here to see the code for converting drawings to Sketch-RNN input format:
> </summary>
>
> ```python
> def to_sketch_rnn_format(drawing, max_len):
> """Converts a drawing to Sketch-RNN input format.
>
> Args:
> drawing: a list of strokes represented as 3-tuples
> max_len: maximum common length of all drawings
>
> Returns:
> NumPy array
> """
> drawing = np.array(drawing)
> result = np.zeros((max_len, 5), dtype=float)
> l = len(drawing)
> assert l <= max_len
> result[0:l, 0:2] = drawing[:, 0:2]
> result[0:l, 3] = drawing[:, 2]
> result[0:l, 2] = 1 - result[0:l, 3]
> result[l:, 4] = 1
> # Prepend special start token
> result = np.vstack([[0, 0, 1, 0, 0], result])
> return result
> ```
>
></details>
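> For instance, a (hypothetical) list `drawings` of such 3-tuple stroke sequences could be padded to a common length and batched like this:
>
> ```python
> import numpy as np
>
> max_len = max(len(d) for d in drawings)  # `drawings` is an assumed list of drawings
> batch = np.stack([to_sketch_rnn_format(d, max_len) for d in drawings])
> print(batch.shape)  # (len(drawings), max_len + 1, 5) -- the +1 is the start token
> ```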
### Data Splits
In the configurations `raw`, `preprocessed_simplified_drawings` and `preprocessed_bitmaps` (default configuration), all the data is contained in the training set, which has 50,426,266 examples.
`sketch_rnn` and `sketch_rnn_full` have the data split into training, validation and test splits. In the `sketch_rnn` configuration, 75K samples (70K Training, 2.5K Validation, 2.5K Test) have been randomly selected from each of the 345 categories. Therefore, the training set contains 24,150,000 examples (345 × 70K), while the validation and test sets contain 862,500 examples each (345 × 2.5K). The `sketch_rnn_full` configuration has the full (training) data for each category, which leads to a training set of 43,988,874 examples, with the same validation and test sets of 862,500 examples each.
## Dataset Creation
### Curation Rationale
From the GitHub repository:
> The Quick Draw Dataset is a collection of 50 million drawings across [345 categories](categories.txt), contributed by players of the game [Quick, Draw!](https://quickdraw.withgoogle.com). The drawings were captured as timestamped vectors, tagged with metadata including what the player was asked to draw and in which country the player was located. You can browse the recognized drawings on [quickdraw.withgoogle.com/data](https://quickdraw.withgoogle.com/data).
>
> We're sharing them here for developers, researchers, and artists to explore, study, and learn from.
### Source Data
#### Initial Data Collection and Normalization
This dataset contains vector drawings obtained from [Quick, Draw!](https://quickdraw.withgoogle.com/), an online game where the players are asked to draw objects belonging to a particular object class in less than 20 seconds.
#### Who are the source language producers?
The participants in the [Quick, Draw!](https://quickdraw.withgoogle.com/) game.
### Annotations
#### Annotation process
The annotations are machine-generated and match the category the player was prompted to draw.
#### Who are the annotators?
The annotations are machine-generated.
### Personal and Sensitive Information
Some sketches are known to be problematic (see https://github.com/googlecreativelab/quickdraw-dataset/issues/74 and https://github.com/googlecreativelab/quickdraw-dataset/issues/18).
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Jonas Jongejan, Henry Rowley, Takashi Kawashima, Jongmin Kim and Nick Fox-Gieg.
### Licensing Information
The data is made available by Google, Inc. under the [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information
```bibtex
@article{DBLP:journals/corr/HaE17,
author = {David Ha and
Douglas Eck},
title = {A Neural Representation of Sketch Drawings},
journal = {CoRR},
volume = {abs/1704.03477},
year = {2017},
url = {http://arxiv.org/abs/1704.03477},
archivePrefix = {arXiv},
eprint = {1704.03477},
timestamp = {Mon, 13 Aug 2018 16:48:30 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/HaE17},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | [
-0.5416160225868225,
-0.4514091908931732,
0.2969039976596832,
0.09332375228404999,
-0.33376893401145935,
0.17793256044387817,
-0.011398564092814922,
-0.38166528940200806,
0.8498923182487488,
0.42880779504776,
-0.7578997015953064,
-0.891176700592041,
-0.6370787024497986,
0.16675974428653717... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shwet/BBC_NEWS | shwet | 2022-06-14T02:38:56Z | 48 | 0 | null | [
"region:us"
] | 2022-06-14T02:38:56Z | 2022-06-14T02:37:02.000Z | 2022-06-14T02:37:02 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
naver-clova-ix/synthdog-ko | naver-clova-ix | 2022-07-22T06:43:08Z | 48 | 5 | null | [
"region:us"
] | 2022-07-22T06:43:08Z | 2022-07-20T00:45:45.000Z | 2022-07-20T00:45:45 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gigant/ted_descriptions | gigant | 2022-10-18T11:16:29Z | 48 | 0 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-10-18T11:16:29Z | 2022-10-18T10:24:43.000Z | 2022-10-18T10:24:43 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: TED descriptions
size_categories:
- 1K<n<10K
source_datasets:
- original
tags: []
task_categories:
- text-generation
task_ids:
- language-modeling
dataset_info:
features:
- name: url
dtype: string
- name: descr
dtype: string
splits:
- name: train
num_bytes: 2617778
num_examples: 5705
download_size: 1672988
dataset_size: 2617778
---
# Dataset Card for TED descriptions
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.49895504117012024,
-0.5276524424552917,
0.4434092342853546,
0.16763539612293243,
-0.2422754317522049,
0.09670920670032501,
-0.05133720859885216,
-0.3146800100803375,
0.9031363129615784,
0.5311324000358582,
-0.9012489318847656,
-0.6991767287254333,
-0.3987157940864563,
-0.165768310427665... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/pii-pile-chunk3-0-50000 | tomekkorbak | 2022-11-08T18:59:20Z | 48 | 0 | null | [
"region:us"
] | 2022-11-08T18:59:20Z | 2022-11-08T18:59:13.000Z | 2022-11-08T18:59:13 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/genia_ptm_event_corpus | bigbio | 2022-12-22T15:44:39Z | 48 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:44:39Z | 2022-11-13T22:08:36.000Z | 2022-11-13T22:08:36 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: PTM Events
homepage: http://www.geniaproject.org/other-corpora/ptm-event-corpus
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- COREFERENCE_RESOLUTION
- EVENT_EXTRACTION
---
# Dataset Card for PTM Events
## Dataset Description
- **Homepage:** http://www.geniaproject.org/other-corpora/ptm-event-corpus
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,COREF,EE
Post-translational-modifications (PTM), amino acid modifications of proteins after translation, are one of the posterior processes of protein biosynthesis for many proteins, and they are critical for determining protein function such as its activity state, localization, turnover and interactions with other biomolecules. While there have been many studies of information extraction targeting individual PTM types, there was until recently little effort to address extraction of multiple PTM types at once in a unified framework.
## Citation Information
```
@inproceedings{ohta-etal-2010-event,
title = "Event Extraction for Post-Translational Modifications",
author = "Ohta, Tomoko and
Pyysalo, Sampo and
Miwa, Makoto and
Kim, Jin-Dong and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the 2010 Workshop on Biomedical Natural Language Processing",
month = jul,
year = "2010",
address = "Uppsala, Sweden",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W10-1903",
pages = "19--27",
}
```
| [
-0.2951801121234894,
-0.596399188041687,
0.37796396017074585,
-0.07606792449951172,
-0.46645137667655945,
-0.08630938082933426,
-0.3156798481941223,
-0.33952796459198,
0.3925005793571472,
0.3087151050567627,
-0.39211955666542053,
-0.754104733467102,
-0.7433071732521057,
0.511432945728302,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lucadiliello/english_wikipedia | lucadiliello | 2022-12-04T19:05:23Z | 48 | 1 | null | [
"region:us"
] | 2022-12-04T19:05:23Z | 2022-12-04T18:59:49.000Z | 2022-12-04T18:59:49 | ---
dataset_info:
features:
- name: filename
dtype: string
- name: maintext
dtype: string
- name: source_domain
dtype: string
- name: title
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 10569005563
num_examples: 4184712
download_size: 6144953788
dataset_size: 10569005563
---
# Dataset Card for "english_wikipedia"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7753030061721802,
-0.2993205487728119,
0.14103981852531433,
0.1593792885541916,
-0.2404938042163849,
-0.19674967229366302,
-0.1543998271226883,
-0.28142693638801575,
1.0018632411956787,
0.2567213773727417,
-0.7880792021751404,
-0.7920078635215759,
-0.5699225068092346,
0.0685820430517196... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonathan-roberts1/CLRS | jonathan-roberts1 | 2023-03-31T15:35:22Z | 48 | 0 | null | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | 2023-03-31T15:35:22Z | 2023-02-15T16:46:17.000Z | 2023-02-15T16:46:17 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airport
'1': bare land
'2': beach
'3': bridge
'4': commercial
'5': desert
'6': farmland
'7': forest
'8': golf course
'9': highway
'10': industrial
'11': meadow
'12': mountain
'13': overpass
'14': park
'15': parking
'16': playground
'17': port
'18': railway
'19': railway station
'20': residential
'21': river
'22': runway
'23': stadium
'24': storage tank
splits:
- name: train
num_bytes: 2969926932
num_examples: 15000
download_size: 2327956775
dataset_size: 2969926932
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "CLRS"
## Dataset Description
- **Paper:** [CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification](https://www.mdpi.com/1424-8220/20/4/1226/pdf)
### Licensing Information
For academic purposes.
## Citation Information
[CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification](https://www.mdpi.com/1424-8220/20/4/1226/pdf)
```
@article{s20041226,
title = {CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification},
author = {Li, Haifeng and Jiang, Hao and Gu, Xin and Peng, Jian and Li, Wenbo and Hong, Liang and Tao, Chao},
year = 2020,
journal = {Sensors},
volume = 20,
number = 4,
doi = {10.3390/s20041226},
issn = {1424-8220},
url = {https://www.mdpi.com/1424-8220/20/4/1226},
article-number = 1226,
pubmedid = 32102294,
}
``` | [
-0.41305968165397644,
-0.009914650581777096,
0.27460652589797974,
0.0493469275534153,
-0.43216392397880554,
-0.16959604620933533,
-0.06440865993499756,
-0.4480573236942291,
-0.7558835744857788,
0.36906516551971436,
-0.5544948577880859,
-0.7158163189888,
-0.2940465211868286,
0.1676827818155... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
urialon/summ_screen_test | urialon | 2023-02-28T16:39:55Z | 48 | 0 | null | [
"region:us"
] | 2023-02-28T16:39:55Z | 2023-02-28T16:39:49.000Z | 2023-02-28T16:39:49 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Muennighoff/python-bugs | Muennighoff | 2023-03-22T07:46:03Z | 48 | 4 | null | [
"region:us"
] | 2023-03-22T07:46:03Z | 2023-03-22T07:45:19.000Z | 2023-03-22T07:45:19 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
grosenthal/latin_english_parallel | grosenthal | 2023-04-28T02:11:31Z | 48 | 3 | null | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:la",
"language:en",
"license:mit",
"region:us"
] | 2023-04-28T02:11:31Z | 2023-04-07T21:09:52.000Z | 2023-04-07T21:09:52 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: la
dtype: string
- name: en
dtype: string
- name: file
dtype: string
splits:
- name: train
num_bytes: 39252644
num_examples: 99343
- name: test
num_bytes: 405056
num_examples: 1014
- name: valid
num_bytes: 392886
num_examples: 1014
download_size: 25567350
dataset_size: 40050586
license: mit
task_categories:
- translation
language:
- la
- en
pretty_name: Latin to English Translation Pairs
size_categories:
- 10K<n<100K
---
# Dataset Card for "latin_english_parallel"
101k translation pairs between Latin and English, split 99/1/1 as train/test/val. These have been collected roughly 66% from the Loeb Classical Library and 34% from the Vulgate translation.
For those that were gathered from the Loeb Classical Library, alignment was performed manually between source and target sequences. Additionally, the English translations were both (1) copyrighted and (2) outdated. As such, we decided to modernize and transform them into ones that could be used in the public domain, as the original Latin is not copyrighted.
To perform this, we used the gpt-3.5-turbo model from OpenAI with the prompt `Translate an old dataset from the 1800s to modern English while preserving the original meaning and exact same sentence structure. Retain extended adjectives, dependent clauses, and punctuation. Output the translation preceded by the text "Modern Translation: ". If a given translation is not a complete sentence, repeat the input sentence. \n` followed by the source English.
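For illustration, such a call might have looked like the sketch below with the legacy (pre-1.0) OpenAI Python client; the card does not include the actual code, so everything here is an assumption about the workflow:
```python
import openai  # legacy (pre-1.0) client, matching the 2023-era workflow described above

PROMPT = (
    "Translate an old dataset from the 1800s to modern English while preserving "
    "the original meaning and exact same sentence structure. Retain extended "
    "adjectives, dependent clauses, and punctuation. Output the translation "
    'preceded by the text "Modern Translation: ". If a given translation is not '
    "a complete sentence, repeat the input sentence. \n"
)

def modernize(english_source: str) -> str:
    # One chat completion call per English source passage.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT + english_source}],
    )
    return response["choices"][0]["message"]["content"]
```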
We then manually corrected all outputs that did not conform to the standard.
Each sample is annotated with the index and file (and therefore author/work) that the sample is from. If you find errors, please feel free to submit a PR to fix them.
 | [
-0.1842947006225586,
-0.431439608335495,
0.3027779757976532,
0.3002490997314453,
-0.48713594675064087,
-0.32369545102119446,
-0.41011640429496765,
-0.4975346326828003,
0.4867520034313202,
0.6235206127166748,
-0.30948397517204285,
-0.5917161703109741,
-0.26187217235565186,
0.597795307636261... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuhaotian/LLaVA-CC3M-Pretrain-595K | liuhaotian | 2023-07-06T08:51:35Z | 48 | 42 | null | [
"language:en",
"license:other",
"region:us"
] | 2023-07-06T08:51:35Z | 2023-04-20T14:28:12.000Z | 2023-04-20T14:28:12 | ---
license: other
language:
- en
pretty_name: LLaVA CC3M Pretrain 595K
---
# LLaVA Visual Instruct CC3M 595K Pretrain Dataset Card
## Dataset details
**Dataset type:**
LLaVA Visual Instruct CC3M Pretrain 595K is a subset of the CC-3M dataset, filtered with a more balanced concept coverage distribution.
Captions are also associated with [BLIP synthetic caption](https://github.com/salesforce/BLIP#pre-training-datasets-download) for reference.
It is constructed for the pretraining stage for feature alignment in visual instruction tuning.
We aim to build large multimodal models towards GPT-4 vision/language capability.
**Dataset date:**
LLaVA Visual Instruct CC3M Pretrain 595K was created in April 2023.
**Dataset structure:**
- `chat.json` contains the multimodal synthesized conversation from the image-caption pairs, by adding randomly selected instructions like: "Describe this image". It is used for pretraining in LLaVA. We use the raw CC-3M caption as the default answer.
- `metadata.json` contains the meta data of the image index in CC-3M, image file name, image URL, original CC-3M caption, synthetic BLIP caption. Note that ~10% of the samples are not associated with BLIP caption yet in this release.
- `images.zip` contains all raw images of the filtered subset from CC-3M. **Important notice: Upon the request from the community, as ~15% images of the original CC-3M dataset are no longer accessible, we upload `images.zip` for better reproducing our work in research community. It should not be used for any other purpose. The use of these images must comply with the CC-3M license. This may be taken down when requested by the original CC-3M dataset owner or owners of the referenced images.**
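As a rough sketch, the conversation file can be paired with the extracted images as follows; the field names are assumptions based on the common LLaVA data format and are not spelled out in this card:
```python
import json
from pathlib import Path

# Assumes chat.json and the extracted contents of images.zip are in the
# current directory; adjust the paths for your setup.
chat = json.loads(Path("chat.json").read_text())

first = chat[0]
print(first["id"], first["image"])       # assumed field names
for turn in first["conversations"]:      # assumed field name
    print(turn["from"], ":", turn["value"][:80])
```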
**Paper or resources for more information:**
https://llava-vl.github.io/
**License:**
Must comply with license of [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE), [BLIP](https://github.com/salesforce/BLIP/blob/main/LICENSE.txt) (if you use their synthetic caption).
CC-3M
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. | [
-0.18038210272789001,
-0.5136120319366455,
0.3349539637565613,
0.24080033600330353,
-0.5250299572944641,
0.08735639601945877,
-0.18460221588611603,
-0.5527102947235107,
0.23658838868141174,
0.5798306465148926,
-0.7962904572486877,
-0.6643292307853699,
-0.46283426880836487,
0.08483318239450... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
turkish-nlp-suite/turkish-wikiNER | turkish-nlp-suite | 2023-09-26T10:37:00Z | 48 | 0 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:tr",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-09-26T10:37:00Z | 2023-05-07T16:41:04.000Z | 2023-05-07T16:41:04 | ---
language:
- tr
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Turkish-WikiNER
---
# Dataset Card for "turkish-nlp-suite/turkish-wikiNER"
<img src="https://raw.githubusercontent.com/turkish-nlp-suite/.github/main/profile/wiki.png" width="20%" height="20%">
## Dataset Description
- **Repository:** [Turkish-WikiNER](https://github.com/turkish-nlp-suite/Turkish-Wiki-NER-Dataset)
- **Paper:** [ACL link](https://aclanthology.org/2023.acl-long.768/)
- **Dataset:** Turkish-WikiNER
- **Domain:** Wiki
- **Number of Labels:** 18
### Dataset Summary
Turkish NER dataset from Wikipedia sentences. 20,000 sentences were sampled and re-annotated from the [Kuzgunlar NER dataset](https://data.mendeley.com/datasets/cdcztymf4k/1).
Annotations were done by [Co-one](https://co-one.co/). Many thanks to them for their contributions. This dataset is also used in our brand new spaCy Turkish packages.
### Dataset Instances
An instance of this dataset looks as follows:
```
{
"tokens": ["Çekimler", "5", "Temmuz", "2005", "tarihinde", "Reebok", "Stadyum", ",", "Bolton", ",", "İngiltere'de", "yapılmıştır", "."],
"tags": [O", "B-DATE", "I-DATE", "I-DATE", "O", "B-FAC", "I-FAC", "O", "B-GPE", "O", "B-GPE", "O", "O"]
}
```
or even better:

### Labels
- CARDINAL
- DATE
- EVENT
- FAC
- GPE
- LANGUAGE
- LAW
- LOC
- MONEY
- NORP
- ORDINAL
- ORG
- PERCENT
- PERSON
- PRODUCT
- QUANTITY
- TIME
- TITLE
- WORK_OF_ART
### Data Split
| name |train|validation|test|
|---------|----:|---------:|---:|
|Turkish-WikiNER|18000| 1000|1000|
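A minimal loading sketch using the repository id above:
```python
from datasets import load_dataset

ds = load_dataset("turkish-nlp-suite/turkish-wikiNER")
example = ds["train"][0]
print(example["tokens"])
print(example["tags"])
```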
### Citation
This work is supported by the Google Developer Experts Program. Part of Duygu's 2022 Fall-Winter collection, "Turkish NLP with Duygu" / "Duygu'yla Türkçe NLP". All rights reserved. If you'd like to use this dataset in your own work, please kindly cite [A Diverse Set of Freely Available Linguistic Resources for Turkish](https://aclanthology.org/2023.acl-long.768/):
```
@inproceedings{altinok-2023-diverse,
title = "A Diverse Set of Freely Available Linguistic Resources for {T}urkish",
author = "Altinok, Duygu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.768",
pages = "13739--13750",
abstract = "This study presents a diverse set of freely available linguistic resources for Turkish natural language processing, including corpora, pretrained models and education material. Although Turkish is spoken by a sizeable population of over 80 million people, Turkish linguistic resources for natural language processing remain scarce. In this study, we provide corpora to allow practitioners to build their own applications and pretrained models that would assist industry researchers in creating quick prototypes. The provided corpora include named entity recognition datasets of diverse genres, including Wikipedia articles and supplement products customer reviews. In addition, crawling e-commerce and movie reviews websites, we compiled several sentiment analysis datasets of different genres. Our linguistic resources for Turkish also include pretrained spaCy language models. To the best of our knowledge, our models are the first spaCy models trained for the Turkish language. Finally, we provide various types of education material, such as video tutorials and code examples, that can support the interested audience on practicing Turkish NLP. The advantages of our linguistic resources are three-fold: they are freely available, they are first of their kind, and they are easy to use in a broad range of implementations. Along with a thorough description of the resource creation process, we also explain the position of our resources in the Turkish NLP world.",
}
```
| [
-0.626724123954773,
-0.5838914513587952,
-0.13672520220279694,
0.15575382113456726,
-0.3752223253250122,
-0.35913512110710144,
-0.531408965587616,
-0.42584800720214844,
0.346770316362381,
0.43845367431640625,
-0.49961772561073303,
-0.8239309787750244,
-0.6277239918708801,
0.461329817771911... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tasksource/tracie | tasksource | 2023-05-31T08:26:23Z | 48 | 2 | null | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"nli",
"region:us"
] | 2023-05-31T08:26:23Z | 2023-05-25T07:17:09.000Z | 2023-05-25T07:17:09 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- nli
---
https://github.com/allenai/aristo-leaderboard/tree/master/tracie/data
```
@inproceedings{ZRNKSR21,
author = {Ben Zhou and Kyle Richardson and Qiang Ning and Tushar Khot and Ashish Sabharwal and Dan Roth},
title = {Temporal Reasoning on Implicit Events from Distant Supervision},
booktitle = {NAACL},
year = {2021},
}
``` | [
-0.02551010623574257,
-0.707141637802124,
0.9142193794250488,
0.1376560479402542,
-0.11576738208532333,
-0.0028399962466210127,
0.040662869811058044,
-0.8048762083053589,
0.2324824035167694,
0.16130907833576202,
-0.9573745727539062,
-0.8007326722145081,
-0.6409977674484253,
-0.193498417735... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TigerResearch/sft_zh | TigerResearch | 2023-06-09T12:21:42Z | 48 | 26 | null | [
"language:zh",
"license:apache-2.0",
"region:us"
] | 2023-06-09T12:21:42Z | 2023-06-09T10:15:22.000Z | 2023-06-09T10:15:22 | ---
license: apache-2.0
language:
- zh
---
A collection of Chinese SFT (sft-zh) fine-tuning data from the [Tigerbot](https://github.com/TigerResearch/TigerBot) open-source project.
This collection covers the other Chinese SFT datasets open-sourced under this organization, so there is no need to download them separately.
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/sft_zh')
```
## File breakdown
| Type | Language | Dataset file | Count |
| ------------ | ---- | -------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| alpaca Chinese | Chinese | [tigerbot-alpaca-zh-0.5m](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-alpaca-zh-0.5m.json) | 0.5m |
| Wiki QA | Chinese | [tigerbot-wiki-qa-1k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-wiki-qa-zh-1k.json) | 1k |
| Book QA (classic literature) | Chinese | [tigerbot-book-qa-1k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-book-qa-1k.json) | 1k |
| Riddles | Chinese | [tigerbot-riddle-qa-1k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-riddle-qa-1k.json) | 1k |
| Reading comprehension | Chinese | [tigerbot-superclue-c3-zh-5k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-superclue-c3-zh-5k.json) | 5k |
| QA | Chinese | [tigerbot-hc3-zh-12k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-hc3-zh-12k.json) | 12k |
| Zhihu QA | Chinese | [tigerbot-zhihu-zh-10k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-zhihu-zh-10k.json) | 10k |
| [
-0.5407467484474182,
-0.3538060784339905,
0.15468284487724304,
0.21134479343891144,
-0.4375561773777008,
0.014040830545127392,
-0.1542574167251587,
-0.3082500398159027,
0.7508480548858643,
0.4712340533733368,
-0.47971072793006897,
-0.8371909260749817,
-0.4294067621231079,
0.336960226297378... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PKU-Alignment/BeaverTails-Evaluation | PKU-Alignment | 2023-07-20T15:31:24Z | 48 | 0 | null | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-4.0",
"safe",
"safety",
"ai-safety",
"moderation",
"rejection-sampling",
"llm",
"lm",
"human-feedback",
"arxiv:2307.04657",
"region:us"
] | 2023-07-20T15:31:24Z | 2023-06-14T16:14:14.000Z | 2023-06-14T16:14:14 | ---
license: cc-by-nc-4.0
task_categories:
- text-classification
language:
- en
tags:
- safe
- safety
- ai-safety
- moderation
- rejection-sampling
- llm
- lm
- human-feedback
size_categories:
- n<1K
---
# Dataset Card for BeaverTails-Evaluation
BeaverTails is an AI safety-focused collection comprising a series of datasets.
This repository contains test prompts specifically designed for evaluating language model safety.
It is important to note that although each prompt can be connected to multiple categories, only one category is labeled for each prompt.
The 14 harm categories are defined as follows:
1. **Animal Abuse:** This involves any form of cruelty or harm inflicted on animals, including physical abuse, neglect, or any behavior that causes distress or harm to an animal. It also includes content that promotes such behavior.
1. **Child Abuse:** This encompasses any form of physical, emotional, or sexual abuse directed toward children. It can also include neglect, exploitation, or any behavior that harms a child or violates their rights. Content that promotes or glorifies such behavior also falls under this category.
1. **Controversial Topics, Politics:** This refers to discussions or statements related to controversial topics, political ideologies, or sensitive issues that could lead to disagreement or conflict. These topics can be divisive and could potentially cause harm if not handled respectfully and thoughtfully.
1. **Discrimination, Stereotype, Injustice:** Discrimination refers to treating people unfairly or unequally based on their personal characteristics or affiliations such as race, age, gender, religion, etc. Stereotyping involves holding simplified and generalized beliefs about particular groups, which often leads to prejudices and discrimination. Injustice pertains to unfair treatment or violation of the rights of individuals or groups.
1. **Drug Abuse, Weapons, Banned Substance:** This category includes the misuse or abuse of illegal drugs, prescription drugs, or any other banned substances. It also involves the unauthorized or illegal possession, use, or trade of weapons.
1. **Financial Crime, Property Crime, Theft:** This includes all forms of illegal activities that lead to financial loss such as fraud, embezzlement, money laundering, bribery, and other forms of corruption. Property crime and theft involve the act of taking someone's property without their consent.
1. **Hate Speech, Offensive Language:** This category encompasses any form of communication or content that expresses intense hostility or violence towards an individual or group based on aspects like race, religion, ethnicity, nationality, gender, sexual orientation, or any other defining characteristics. This also includes language that is considered vulgar, crude, or offensive.
1. **Misinformation Regarding ethics, laws, and safety:** This category involves the spreading of incorrect or misleading information related to ethical issues, laws, regulations, or any matter related to safety. Misinformation can cause harm by influencing people's beliefs or actions negatively.
1. **Non-Violent Unethical Behavior:** This refers to any behavior that is morally or ethically wrong, but does not involve violence. Examples can include dishonest behaviors like lying, cheating, deceiving, spreading rumors, betraying trust, or any other form of unethical conduct.
1. **Privacy Violation:** This refers to actions that invade an individual's privacy, such as unauthorized access, use, disclosure, or collection of personal data. It also includes stalking, doxing, or any other form of intrusion into someone's personal life without their consent.
1. **Self-Harm:** This includes any form of self-inflicted harm, whether physical or emotional, or any content that promotes, glorifies, or encourages self-harm. This can also include discussions of suicidal thoughts or plans.
1. **Sexually Explicit, Adult Content:** This category includes any content that depicts explicit sexual activities or adult themes that are considered inappropriate or offensive. This can include pornography, explicit sexual text or messages, and any other form of sexually explicit material.
1. **Terrorism, Organized Crime:** This pertains to any form of content or action related to terrorism or organized crime, including endorsing or promoting terrorist activities, participating in organized criminal activities, or spreading propaganda for such groups.
1. **Violence, Aiding and Abetting, Incitement:** This involves any form of physical harm, threat, or violent behavior towards individuals or groups. Aiding and abetting refers to the act of helping, supporting, or encouraging such violent behaviors or illegal activities. Incitement pertains to the act of provoking or stirring up harmful, violent, or illegal actions.
**Disclaimer**: The BeaverTails dataset and its family contain content that may be offensive or upsetting.
Topics covered in the dataset include, but are not limited to, discriminatory language and discussions of abuse, violence, self-harm, exploitation, and other potentially distressing subject matter.
Please engage with the dataset responsibly and in accordance with your own personal risk tolerance.
The dataset is intended for research purposes, specifically for research aimed at creating safer and less harmful AI systems.
The views and opinions expressed in the dataset do not represent the views of the PKU-Alignment Team or any of its members.
It is important to emphasize that the dataset should not be used for training dialogue agents, as doing so may likely result in harmful model behavior.
The primary objective of this dataset is to facilitate research that could minimize or prevent the harm caused by AI systems.
## Usage
The code snippet below demonstrates how to load the evaluation dataset:
```python
from datasets import load_dataset
# Load the whole dataset
dataset = load_dataset('PKU-Alignment/BeaverTails-Evaluation')
# Load only the v1 data of the companion BeaverTails dataset
round0_dataset = load_dataset('PKU-Alignment/BeaverTails', data_dir='v1')
```
## Paper
You can find more information in our paper.
- **Dataset Paper:** <https://arxiv.org/abs/2307.04657>
## Contact
The original authors host this dataset on GitHub here: https://github.com/PKU-Alignment/beavertails
## License
BeaverTails dataset and its family are released under the CC BY-NC 4.0 License.
| [
-0.14795473217964172,
-0.9271674156188965,
0.23045127093791962,
0.40526464581489563,
0.05200008302927017,
-0.0006605747039429843,
0.11850930750370026,
-0.4482070803642273,
0.15787789225578308,
0.5469726324081421,
-0.7892377376556396,
-0.5182262063026428,
-0.6849561333656311,
0.138179644942... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
divergente/wikitext-ptbr-1 | divergente | 2023-06-15T01:24:23Z | 48 | 0 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"language:pt",
"license:cc-by-sa-3.0",
"license:gfdl",
"region... | 2023-06-15T01:24:23Z | 2023-06-15T00:46:30.000Z | 2023-06-15T00:46:30 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- pt
license:
- cc-by-sa-3.0
- gfdl
multilinguality:
- monolingual
pretty_name: WikiTextPtBr
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
- config_name: wikitext-ptbr
features:
- name: text
dtype: string
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
philippemo/dummy_dataset_without_schema_12_06 | philippemo | 2023-07-03T11:26:35Z | 48 | 0 | null | [
"region:us"
] | 2023-07-03T11:26:35Z | 2023-07-03T11:26:27.000Z | 2023-07-03T11:26:27 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rbiojout/odoo_python_15 | rbiojout | 2023-07-06T15:42:22Z | 48 | 0 | null | [
"license:agpl-3.0",
"region:us"
] | 2023-07-06T15:42:22Z | 2023-07-06T14:50:12.000Z | 2023-07-06T14:50:12 | ---
license: agpl-3.0
dataset_info:
features:
- name: size
dtype: int64
- name: ext
dtype: string
- name: lang
dtype: string
- name: branch
dtype: string
- name: content
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
splits:
- name: train
num_bytes: 40721998
num_examples: 9349
download_size: 12783255
dataset_size: 40721998
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
svjack/cmmlu_ed | svjack | 2023-07-24T06:56:54Z | 48 | 0 | null | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:zh",
"license:cc-by-nc-4.0",
"chinese",
"llm",
"evaluation",
"arxiv:2306.09212",
"region:us"
] | 2023-07-24T06:56:54Z | 2023-07-24T06:30:20.000Z | 2023-07-24T06:30:20 | ---
license: cc-by-nc-4.0
task_categories:
- multiple-choice
- question-answering
language:
- zh
tags:
- chinese
- llm
- evaluation
pretty_name: CMMLU
size_categories:
- 10K<n<100K
---
# CMMLU: Measuring massive multitask language understanding in Chinese
- **Homepage:** [https://github.com/haonan-li/CMMLU](https://github.com/haonan-li/CMMLU)
- **Repository:** [https://huggingface.co/datasets/haonan-li/cmmlu](https://huggingface.co/datasets/haonan-li/cmmlu)
- **Paper:** [CMMLU: Measuring Chinese Massive Multitask Language Understanding](https://arxiv.org/abs/2306.09212).
## Table of Contents
- [Introduction](#introduction)
- [Leaderboard](#leaderboard)
- [Data](#data)
- [Citation](#citation)
- [License](#license)
## Introduction
CMMLU is a comprehensive Chinese assessment suite specifically designed to evaluate the advanced knowledge and reasoning abilities of LLMs within the Chinese language and cultural context.
CMMLU covers a wide range of subjects, comprising 67 topics that span from elementary to advanced professional levels. It includes subjects that require computational expertise, such as physics and mathematics, as well as disciplines within humanities and social sciences.
Many of these tasks are not easily translatable from other languages due to their specific contextual nuances and wording.
Furthermore, numerous tasks within CMMLU have answers that are specific to China and may not be universally applicable or considered correct in other regions or languages.
## Leaderboard
The latest leaderboard is in our [GitHub repository](https://github.com/haonan-li/CMMLU).
## Data
We provide development and test datasets for each of the 67 subjects, with 5 questions in the development set and 100+ questions in the test set.
Each question in the dataset is a multiple-choice question with 4 choices and only one correct answer.
Here are two examples:
```
题目:同一物种的两类细胞各产生一种分泌蛋白,组成这两种蛋白质的各种氨基酸含量相同,但排列顺序不同。其原因是参与这两种蛋白质合成的:
A. tRNA种类不同
B. 同一密码子所决定的氨基酸不同
C. mRNA碱基序列不同
D. 核糖体成分不同
答案是:C
```
```
题目:某种植物病毒V是通过稻飞虱吸食水稻汁液在水稻间传播的。稻田中青蛙数量的增加可减少该病毒在水稻间的传播。下列叙述正确的是:
A. 青蛙与稻飞虱是捕食关系
B. 水稻和病毒V是互利共生关系
C. 病毒V与青蛙是寄生关系
D. 水稻与青蛙是竞争关系
答案是:
```
#### Load data
```python
from datasets import load_dataset
cmmlu = load_dataset("haonan-li/cmmlu", "agronomy")
print(cmmlu['test'][0])
```
#### Load all data at once
```python
task_list = ['agronomy', 'anatomy', 'ancient_chinese', 'arts', 'astronomy', 'business_ethics', 'chinese_civil_service_exam', 'chinese_driving_rule', 'chinese_food_culture', 'chinese_foreign_policy', 'chinese_history', 'chinese_literature',
'chinese_teacher_qualification', 'clinical_knowledge', 'college_actuarial_science', 'college_education', 'college_engineering_hydrology', 'college_law', 'college_mathematics', 'college_medical_statistics', 'college_medicine', 'computer_science',
'computer_security', 'conceptual_physics', 'construction_project_management', 'economics', 'education', 'electrical_engineering', 'elementary_chinese', 'elementary_commonsense', 'elementary_information_and_technology', 'elementary_mathematics',
'ethnology', 'food_science', 'genetics', 'global_facts', 'high_school_biology', 'high_school_chemistry', 'high_school_geography', 'high_school_mathematics', 'high_school_physics', 'high_school_politics', 'human_sexuality',
'international_law', 'journalism', 'jurisprudence', 'legal_and_moral_basis', 'logical', 'machine_learning', 'management', 'marketing', 'marxist_theory', 'modern_chinese', 'nutrition', 'philosophy', 'professional_accounting', 'professional_law',
'professional_medicine', 'professional_psychology', 'public_relations', 'security_study', 'sociology', 'sports_science', 'traditional_chinese_medicine', 'virology', 'world_history', 'world_religions']
from datasets import load_dataset
cmmlu = {k: load_dataset(r"haonan-li/cmmlu", k) for k in task_list}
```
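Using the single-subject `cmmlu` object from the *Load data* section above, a few-shot prompt in the style of the examples can be assembled as sketched below; the column names (`Question`, `A`–`D`, `Answer`) and split names (`dev`, `test`) are assumptions and should be checked against the actual data:
```python
def format_example(row, with_answer=True):
    # Assumed column names: Question, A, B, C, D, Answer.
    prompt = f"题目:{row['Question']}\n"
    for choice in ["A", "B", "C", "D"]:
        prompt += f"{choice}. {row[choice]}\n"
    prompt += "答案是:"
    if with_answer:
        prompt += row["Answer"]
    return prompt

few_shot = "\n".join(format_example(q) for q in cmmlu["dev"])
query = few_shot + "\n" + format_example(cmmlu["test"][0], with_answer=False)
print(query)
```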
## Citation
```
@misc{li2023cmmlu,
title={CMMLU: Measuring massive multitask language understanding in Chinese},
author={Haonan Li and Yixuan Zhang and Fajri Koto and Yifei Yang and Hai Zhao and Yeyun Gong and Nan Duan and Timothy Baldwin},
year={2023},
eprint={2306.09212},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
The CMMLU dataset is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
| [
-0.3426707983016968,
-0.7962832450866699,
0.4867807626724243,
0.2306753247976303,
-0.20297156274318695,
-0.0886649414896965,
-0.5449652075767517,
-0.05252194032073021,
0.15940327942371368,
0.2757505178451538,
-0.4781206548213959,
-0.7366830706596375,
-0.5766618251800537,
0.1281528025865554... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AISE-TUDelft/PY150k | AISE-TUDelft | 2023-08-11T11:47:47Z | 48 | 0 | null | [
"size_categories:100K<n<1M",
"license:cc0-1.0",
"code",
"region:us"
] | 2023-08-11T11:47:47Z | 2023-08-11T09:28:32.000Z | 2023-08-11T09:28:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
dataset_info:
features:
- name: index
dtype: int64
- name: input
dtype: string
- name: gt
dtype: string
- name: hash
dtype: int64
- name: full_line
dtype: string
splits:
- name: train
num_bytes: 662931126
num_examples: 95000
- name: dev
num_bytes: 41218084
num_examples: 5000
- name: test
num_bytes: 343336086
num_examples: 50000
download_size: 277005224
dataset_size: 1047485296
license: cc0-1.0
tags:
- code
pretty_name: PY150 Line Completion Dataset
size_categories:
- 100K<n<1M
---
# Dataset Card for "PY150k"
## Dataset Summary
Code completion dataset created from the code available in [CodeXGLUE](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line).
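A minimal sketch of loading one example, using the `input`/`gt` fields listed in the metadata above:
```python
from datasets import load_dataset

ds = load_dataset("AISE-TUDelft/PY150k", split="test")
example = ds[0]
print(example["input"][-200:])    # code context preceding the line to complete
print("target:", example["gt"])   # ground-truth line completion
```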
| [
-0.5660997033119202,
-0.04260249808430672,
0.05470594763755798,
0.32600727677345276,
-0.08192719519138336,
0.21972842514514923,
0.11410997807979584,
-0.12369246780872345,
0.5946959853172302,
0.6444361805915833,
-0.7730637788772583,
-0.6682470440864563,
-0.03581668436527252,
0.0371712036430... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pszemraj/simple_wikipedia | pszemraj | 2023-09-09T14:54:54Z | 48 | 0 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"language modeling",
"lamguage",
"2023 data",
"region:us"
] | 2023-09-09T14:54:54Z | 2023-09-03T02:32:33.000Z | 2023-09-03T02:32:33 | ---
license: apache-2.0
task_categories:
- text-generation
- fill-mask
language:
- en
tags:
- language modeling
- lamguage
- 2023 data
size_categories:
- 100K<n<1M
---
# simple wikipedia
the 'simple' split of Wikipedia, from Sept 1, 2023. The train split contains about 65M tokens.
Pulled via:
```python
dataset = load_dataset(
"wikipedia", language="simple", date="20230901", beam_runner="DirectRunner"
)
```
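The token-length statistics below refer to the NeoX tokenizer; they can be reproduced roughly as follows (the tokenizer checkpoint is an assumption):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")  # assumed NeoX tokenizer
n_tokens = [len(tok(text)["input_ids"]) for text in dataset["train"]["text"]]
print(sum(n_tokens))  # should be on the order of 65M for the train split
```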
## stats
### train split
general info
```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 226242 entries, 0 to 226241
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 226242 non-null string
1 url 226242 non-null string
2 title 226242 non-null string
3 text 226242 non-null string
dtypes: string(4)
```
token length (NeoX)

| | tokens |
|:------|--------------:|
| count | 226242 |
| mean | 287.007 |
| std | 1327.07 |
| min | 1 |
| 25% | 65 |
| 50% | 126 |
| 75% | 243 |
| max | 60844 | | [
-0.7743191123008728,
-0.4814794063568115,
0.26964887976646423,
0.02472751773893833,
-0.5396811366081238,
-0.3492237627506256,
-0.12046418339014053,
-0.353574275970459,
0.7156521677970886,
0.34417232871055603,
-0.6712586879730225,
-0.5298253297805786,
-0.5761687755584717,
0.472810298204422,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hakanssonjesper/dataset-llama | hakanssonjesper | 2023-10-01T16:39:18Z | 48 | 0 | null | [
"region:us"
] | 2023-10-01T16:39:18Z | 2023-09-15T14:21:39.000Z | 2023-09-15T14:21:39 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 65284594.45526487
num_examples: 45592
- name: validation
num_bytes: 16322580.544735134
num_examples: 11399
download_size: 38476271
dataset_size: 81607175.0
---
# Dataset Card for "dataset-llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.47499698400497437,
-0.19942760467529297,
0.26584458351135254,
0.3992401957511902,
-0.4972754716873169,
0.2734453082084656,
0.46365559101104736,
-0.2993345260620117,
1.076555848121643,
0.5465254187583923,
-0.8062742352485657,
-0.8279383182525635,
-0.8106445670127869,
-0.04562763124704361... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DanFosing/public-domain-poetry | DanFosing | 2023-09-24T11:48:44Z | 48 | 3 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2023-09-24T11:48:44Z | 2023-09-16T20:46:31.000Z | 2023-09-16T20:46:31 | ---
task_categories:
- text-generation
pretty_name: public-domain-poetry
language:
- en
size_categories:
- 10K<n<100K
license: cc0-1.0
---
# Overview
This dataset is a collection of approximately 38,500 poems from https://www.public-domain-poetry.com/.
## Language
The language of this dataset is English.
## License
All data in this dataset is public domain, which means you should be able to use it for anything you want, as long as you aren't breaking any law in the process of doing so. | [
-0.1858246773481369,
-0.01269710622727871,
0.21546202898025513,
0.5309693813323975,
-0.6156965494155884,
-0.218189537525177,
-0.3100173771381378,
-0.6245253086090088,
0.35489270091056824,
1.0357223749160767,
-0.5881581902503967,
-0.7470169067382812,
-0.7586756348609924,
0.46555474400520325... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/CochranePLS | hippocrates | 2023-09-20T16:38:53Z | 48 | 0 | null | [
"region:us"
] | 2023-09-20T16:38:53Z | 2023-09-20T16:38:50.000Z | 2023-09-20T16:38:50 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6321
num_examples: 1
- name: test
num_bytes: 1279406
num_examples: 200
download_size: 669610
dataset_size: 1285727
---
# Dataset Card for "CochranePLS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2958182692527771,
0.19438740611076355,
0.36022961139678955,
0.30036067962646484,
-0.28825873136520386,
-0.18221499025821686,
0.3609367311000824,
0.02350497804582119,
0.818279504776001,
0.4896845817565918,
-0.9171529412269592,
-0.8020021319389343,
-0.7094899415969849,
-0.1192131042480468... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/medQA_test | hippocrates | 2023-10-12T16:27:59Z | 48 | 0 | null | [
"region:us"
] | 2023-10-12T16:27:59Z | 2023-10-03T12:12:56.000Z | 2023-10-03T12:12:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 11533105
num_examples: 10178
- name: valid
num_bytes: 1440369
num_examples: 1272
- name: test
num_bytes: 1468137
num_examples: 1273
download_size: 6901097
dataset_size: 14441611
---
# Dataset Card for "medQA_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6312817931175232,
-0.26816895604133606,
0.1967790722846985,
-0.011574024334549904,
-0.11152879893779755,
-0.02080894261598587,
0.442226767539978,
-0.025404304265975952,
0.768474280834198,
0.47682976722717285,
-0.8653227090835571,
-0.6757143139839172,
-0.4090264141559601,
-0.200065299868... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/medMCQA_test | hippocrates | 2023-10-12T16:37:02Z | 48 | 0 | null | [
"region:us"
] | 2023-10-12T16:37:02Z | 2023-10-03T12:29:51.000Z | 2023-10-03T12:29:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 75747167
num_examples: 182822
- name: valid
num_bytes: 1799482
num_examples: 4183
- name: test
num_bytes: 1799482
num_examples: 4183
download_size: 26861287
dataset_size: 79346131
---
# Dataset Card for "medMCQA_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6483585238456726,
-0.2558145225048065,
0.2663162648677826,
-0.020067838951945305,
-0.15882809460163116,
0.07462326437234879,
0.4961980879306793,
0.0574323795735836,
0.7161453366279602,
0.4952920377254486,
-0.9443637728691101,
-0.7104121446609497,
-0.4579249620437622,
-0.2403555363416671... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
matthewfranglen/aste-v2 | matthewfranglen | 2023-10-09T10:05:10Z | 48 | 0 | null | [
"task_categories:token-classification",
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"arxiv:1911.01616",
"arxiv:2107.12214",
"arxiv:2010.02609",
"region:us"
] | 2023-10-09T10:05:10Z | 2023-10-07T18:45:31.000Z | 2023-10-07T18:45:31 | ---
pretty_name: "Aspect Sentiment Triplet Extraction v2"
language:
- en
arxiv:
- 2107.12214
- 2010.02609
- 1911.01616
size_categories:
- 1K<n<10K
task_categories:
- token-classification
- text-classification
configs:
- config_name: 2014-laptop-sem-eval
data_files:
- split: train
path: "data/2014/laptop/sem-eval/train.gz.parquet"
- split: valid
path: "data/2014/laptop/sem-eval/valid.gz.parquet"
- split: test
path: "data/2014/laptop/sem-eval/test.gz.parquet"
- config_name: 2014-laptop-aste-v2
data_files:
- split: train
path: "data/2014/laptop/aste/train.gz.parquet"
- split: valid
path: "data/2014/laptop/aste/valid.gz.parquet"
- split: test
path: "data/2014/laptop/aste/test.gz.parquet"
- config_name: 2014-restaurant-sem-eval
data_files:
- split: train
path: "data/2014/restaurant/sem-eval/train.gz.parquet"
- split: valid
path: "data/2014/restaurant/sem-eval/valid.gz.parquet"
- split: test
path: "data/2014/restaurant/sem-eval/test.gz.parquet"
- config_name: 2014-restaurant-aste-v2
data_files:
- split: train
path: "data/2014/restaurant/aste/train.gz.parquet"
- split: valid
path: "data/2014/restaurant/aste/valid.gz.parquet"
- split: test
path: "data/2014/restaurant/aste/test.gz.parquet"
- config_name: 2015-restaurant-sem-eval
data_files:
- split: train
path: "data/2015/restaurant/sem-eval/train.gz.parquet"
- split: valid
path: "data/2015/restaurant/sem-eval/valid.gz.parquet"
- split: test
path: "data/2015/restaurant/sem-eval/test.gz.parquet"
- config_name: 2015-restaurant-aste-v2
data_files:
- split: train
path: "data/2015/restaurant/aste/train.gz.parquet"
- split: valid
path: "data/2015/restaurant/aste/valid.gz.parquet"
- split: test
path: "data/2015/restaurant/aste/test.gz.parquet"
- config_name: 2016-restaurant-sem-eval
data_files:
- split: train
path: "data/2016/restaurant/sem-eval/train.gz.parquet"
- split: valid
path: "data/2016/restaurant/sem-eval/valid.gz.parquet"
- split: test
path: "data/2016/restaurant/sem-eval/test.gz.parquet"
- config_name: 2016-restaurant-aste-v2
data_files:
- split: train
path: "data/2016/restaurant/aste/train.gz.parquet"
- split: valid
path: "data/2016/restaurant/aste/valid.gz.parquet"
- split: test
path: "data/2016/restaurant/aste/test.gz.parquet"
---
## Dataset Description
### Task Summary
Aspect Sentiment Triplet Extraction (ASTE) is the task of extracting the triplets of target entities, their associated sentiment, and opinion spans explaining the reason for the sentiment.
This task was first proposed by Peng et al. (2020) in the paper [Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis (In AAAI 2020)](https://arxiv.org/abs/1911.01616).
For Example, given the sentence:
> The screen is very large and crystal clear with amazing colors and resolution .
The objective of the Aspect Sentiment Triplet Extraction (ASTE) task is to predict the triplets:
> [('screen', 'large', 'Positive'), ('screen', 'clear', 'Positive'), ('colors', 'amazing', 'Positive'), ('resolution', 'amazing', 'Positive')]
where a triplet consists of (target, opinion, sentiment).
### Dataset Summary
Sentiment analysis is increasingly viewed as a vital task both from an academic and a commercial standpoint.
The majority of current approaches, however, attempt to detect the overall polarity of a sentence, paragraph, or text span, regardless of the entities mentioned (e.g., laptops, restaurants) and their aspects (e.g., battery, screen; food, service).
By contrast, this task is concerned with aspect based sentiment analysis (ABSA), where the goal is to identify the aspects of given target entities and the sentiment expressed towards each aspect.
This dataset consists of customer reviews with human-authored annotations identifying the mentioned aspects of the target entities and the sentiment polarity of each aspect.
### Dataset Source
The ASTE dataset is from the [xuuuluuu/SemEval-Triplet-data](https://github.com/xuuuluuu/SemEval-Triplet-data) repository.
It is based on the Sem Eval 2014, 2015 and 2016 datasets, with some preprocessing applied to the text.
* [Sem Eval 2014 Task 4](https://alt.qcri.org/semeval2014/task4/)
* [Sem Eval 2015 Task 12](https://alt.qcri.org/semeval2015/task12/)
* [Sem Eval 2016 Task 5](https://alt.qcri.org/semeval2016/task5/)
### Dataset Details
The train, validation and test splits come from the ASTE dataset.
There are the following columns:
* index
The ASTE and Sem Eval datasets had multiple annotations per document.
This dataset has a single annotation per row.
To make it easier to collect all annotations for a document, the index can be used to group them (see the loading sketch below).
All annotations for a given document will have the same index.
* text
This is the document that is annotated, either in the ASTE form or in the Sem Eval form (see below for details).
* aspect_start_index
The zero based character index for the first letter of the aspect term
* aspect_end_index
The zero based character index for the last letter of the aspect term
* aspect_term
The aspect term as it appears in the text
* opinion_start_index
The zero based character index for the first letter of the opinion term
* opinion_end_index
The zero based character index for the last letter of the opinion term
* opinion_term
The opinion term as it appears in the text
* sentiment
The sentiment class for the opinion about the aspect.
One of _negative_, _neutral_ or _positive_.
The ASTE dataset involved preprocessing the SemEval text.
This preprocessing fixed some of the spelling mistakes, for example:
> Keyboard good sized and wasy to use.
(easy misspelt as wasy).
The preprocessing also includes tokenizing the text and then separating the tokens with whitespace, for example:
> It 's just as fast with one program open as it is with sixteen open .
Since the added whitespace can lead to unnatural text, I have provided two forms of the dataset.
Subsets that end with `aste-v2` have the preprocessed text with spelling correction and additional whitespace.
Subsets that end with `sem-eval` have the original Sem Eval text.
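A minimal sketch of loading one subset and grouping the per-row annotations back into per-document triplets, using the standard `datasets` API and the column names described above:
```python
from collections import defaultdict

from datasets import load_dataset

# Load the preprocessed (ASTE) form of the 2014 laptop reviews.
ds = load_dataset("matthewfranglen/aste-v2", "2014-laptop-aste-v2")

# Each row holds a single (aspect, opinion, sentiment) annotation;
# rows that share an index annotate the same document.
triplets_by_doc = defaultdict(list)
for row in ds["train"]:
    triplets_by_doc[row["index"]].append(
        (row["aspect_term"], row["opinion_term"], row["sentiment"])
    )
```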
### Citation Information
```
@misc{xu2021learning,
title={Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction},
author={Lu Xu and Yew Ken Chia and Lidong Bing},
year={2021},
eprint={2107.12214},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{xu2021positionaware,
title={Position-Aware Tagging for Aspect Sentiment Triplet Extraction},
author={Lu Xu and Hao Li and Wei Lu and Lidong Bing},
year={2021},
eprint={2010.02609},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{peng2019knowing,
title={Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis},
author={Haiyun Peng and Lu Xu and Lidong Bing and Fei Huang and Wei Lu and Luo Si},
year={2019},
eprint={1911.01616},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
-0.49414369463920593,
-0.7035523056983948,
0.5185657739639282,
0.4850981831550598,
-0.14776477217674255,
0.05821467563509941,
-0.19780635833740234,
-0.3954339027404785,
0.43469277024269104,
0.6713408827781677,
-0.47486960887908936,
-0.7816386818885803,
-0.355114221572876,
0.217279419302940... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shossain/merged-no-pad-16384 | shossain | 2023-11-06T22:00:57Z | 48 | 0 | null | [
"region:us"
] | 2023-11-06T22:00:57Z | 2023-10-16T00:46:28.000Z | 2023-10-16T00:46:28 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1226064029
num_examples: 6401
download_size: 336720252
dataset_size: 1226064029
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "merged-no-pad-16384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.936918318271637,
-0.16407354176044464,
0.23572875559329987,
0.31839847564697266,
-0.49052590131759644,
0.10935601592063904,
0.3950953185558319,
-0.11325779557228088,
1.1238960027694702,
0.7310417890548706,
-0.7868067622184753,
-0.6186903715133667,
-0.6015720367431641,
-0.200222313404083... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
crumb/c4-benchfilter-nano | crumb | 2023-10-22T19:22:56Z | 48 | 2 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"language_creators:found",
"size_categories:100K<n<1M",
"source_datasets:c4",
"language:en",
"license:odc-by",
"region:us"
] | 2023-10-22T19:22:56Z | 2023-10-22T06:20:10.000Z | 2023-10-22T06:20:10 | ---
language_creators:
- found
language:
- en
license: odc-by
source_datasets:
- c4
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
features:
- name: text
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 373897649.51453334
num_examples: 278115
download_size: 242478448
dataset_size: 373897649.51453334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
size_categories:
- 100K<n<1M
---
# crumb/c4-benchfilter-nano
A 278k-sample derivation of the first 3M samples of the C4 dataset, for cheap, short continued pretraining of language models that optimizes benchmark scores without sacrificing generalization or generative modelling unrelated to chat or 'instruct' data.
The dataset keeps roughly the top 10% of samples by estimated length-normalized n-gram overlap (the mean of tri-, quad-, and penta-gram overlap) with each of the selected benchmark datasets (arc, truthful_qa, hellaswag, mmlu, humaneval), estimated from 1k benchmark samples against the first 3M samples of C4. The top-scoring samples for each benchmark are then filtered again to the top 30% of scores, combined, and exact-match de-duplicated. Finally, the top 3% of scores and samples shorter than 200 characters are removed, because they likely contain exact large n-token matches by chance, such as exact dates or times that aren't actually relevant to the data.\*
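A minimal sketch of loading the dataset and filtering on the `score` column, as the footnote below suggests, using the standard `datasets` API (the threshold value and comparison direction are illustrative; inspect the score distribution before choosing either):
```python
from datasets import load_dataset

ds = load_dataset("crumb/c4-benchfilter-nano", split="train")

# thresh is illustrative; higher scores mean stronger estimated
# benchmark n-gram overlap, which can also indicate chance matches.
thresh = 0.5
kept = ds.filter(lambda x: x["score"] > thresh)
```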
\*Upon further examination, some of these samples are still present throughout the data, albeit at much lower frequency than before. You might benefit from filtering on the score column with some threshold (e.g. `dataset.filter(lambda x: x['score'] > thresh)`, as sketched above), but you risk losing high-quality samples as well; this tradeoff should be well examined before training. | [
-0.6572263836860657,
-0.4575559198856354,
0.3316776752471924,
-0.09074116498231888,
-0.28960689902305603,
0.18179310858249664,
-0.4458237290382385,
-0.4097593128681183,
-0.16950710117816925,
0.43971797823905945,
-0.5032109022140503,
-0.7320159673690796,
-0.50159752368927,
-0.13541172444820... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atmallen/sharegpt-binary | atmallen | 2023-10-23T21:50:35Z | 48 | 0 | null | [
"region:us"
] | 2023-10-23T21:50:35Z | 2023-10-23T05:40:21.000Z | 2023-10-23T05:40:21 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: statement
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
- name: model
dtype: string
splits:
- name: test
num_bytes: 1090167
num_examples: 243
download_size: 188810
dataset_size: 1090167
---
# Dataset Card for "sharegpt-binary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7125837206840515,
-0.24130183458328247,
0.1982470154762268,
0.3743504583835602,
-0.3585231602191925,
0.09255310893058777,
0.3091762959957123,
-0.2228080928325653,
0.8418192863464355,
0.2340928018093109,
-0.8761101961135864,
-0.7312291860580444,
-0.9061583280563354,
-0.4563017189502716,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
19kmunz/iot-23-preprocessed | 19kmunz | 2023-11-03T17:12:47Z | 48 | 0 | null | [
"task_categories:question-answering",
"task_categories:tabular-classification",
"language:en",
"code",
"region:us"
] | 2023-11-03T17:12:47Z | 2023-10-27T16:39:01.000Z | 2023-10-27T16:39:01 | ---
dataset_info:
features:
- name: id.orig_p
dtype: int64
- name: id.resp_p
dtype: int64
- name: proto
dtype: string
- name: service
dtype: string
- name: duration
dtype: float64
- name: orig_bytes
dtype: int64
- name: resp_bytes
dtype: int64
- name: conn_state
dtype: string
- name: missed_bytes
dtype: int64
- name: history
dtype: string
- name: orig_pkts
dtype: int64
- name: orig_ip_bytes
dtype: int64
- name: resp_pkts
dtype: int64
- name: resp_ip_bytes
dtype: int64
- name: label
dtype: string
splits:
- name: train
num_bytes: 93994789
num_examples: 819024
download_size: 11805369
dataset_size: 93994789
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- question-answering
- tabular-classification
language:
- en
tags:
- code
pretty_name: d
---
# Aposemat IoT-23 - a Labeled Dataset with Malicious and Benign IoT Network Traffic
**Homepage:** [https://www.stratosphereips.org/datasets-iot23](https://www.stratosphereips.org/datasets-iot23)
This dataset contains a subset of the data from 20 captures of malicious network traffic and 3 captures of live benign traffic on Internet of Things (IoT) devices. Created by Sebastian Garcia, Agustin Parmisano, & Maria Jose Erquiaga at the Avast AIC laboratory with the funding of Avast Software, this dataset is one of the best in the field for Intrusion Detection Systems (IDS) for IoT devices [(Comparative Analysis of IoT Botnet Datasets)](https://doi.org/10.53070/bbd.1173687).
The selection of the subset was determined by [Aqeel Ahmed on Kaggle](https://www.kaggle.com/datasets/engraqeel/iot23preprocesseddata) and contains 6 million samples. Neither the Kaggle upload nor this one employs data balancing. The Kaggle card does not describe the criteria used to select these samples. If you want to ensure best practice, use this dataset to mock up processing the data into a model before using the full dataset with data balancing. That will require processing the 8 GB of conn.log.labelled files.
This dataset only notes whether the traffic is Malicious or Benign. The original dataset also labels the type of malicious traffic. This means this processing of the dataset is only suited to binary classification.
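A minimal sketch of that mock-up, assuming the standard `datasets` API; the column names match the feature list in the next section:
```python
from datasets import load_dataset

# Load the preprocessed IoT-23 subset (single train split).
ds = load_dataset("19kmunz/iot-23-preprocessed", split="train")

# Turn the string label into a binary target for classification.
ds = ds.map(lambda x: {"target": 1 if x["label"] == "Malicious" else 0})

# Hold out a test set, since the upload ships no test split.
splits = ds.train_test_split(test_size=0.2, seed=42)
train, test = splits["train"], splits["test"]
```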
# Feature information:
All features originate from the [Zeek](https://docs.zeek.org/en/master/scripts/base/protocols/conn/main.zeek.html#type-Conn::Info) processing performed by the dataset creators. [See notes here for caveats for each column](https://docs.zeek.org/en/master/scripts/base/protocols/conn/main.zeek.html#type-Conn::Info).
<details>
<summary>Expand for feature names, descriptions, and datatypes</summary>
Name: id.orig_p
Description: The originator’s port number.
Data type: int64 - uint64 in original
Name: id.resp_p
Description: The responder’s port number.
Data type: int64 - uint64 in original
Name: proto
Description: The transport layer protocol of the connection.
Data type: string - enum(unknown_transport, tcp, udp, icmp). Only TCP and UDP in subset
Name: service
Description: An identification of an application protocol being sent over the connection.
Data type: optional string
Name: duration
Description: How long the connection lasted.
Data type: optional float64 - time interval
Name: orig_bytes
Description: The number of payload bytes the originator sent.
Data type: optional int64 - uint64 in original
Name: resp_bytes
Description: The number of payload bytes the responder sent.
Data type: optional int64 - uint64 in original
Name: conn_state
Description: Value indicating connection state. (S0, S1, SF, REJ, S2, S3, RSTO, RSTR, RSTOS0, RSTRH, SH, SHR, OTH)
Data type: optional string
Name: missed_bytes
Description: Indicates the number of bytes missed in content gaps, which is representative of packet loss.
Data type: optional int64 - uint64 in original. default = 0
Name: history
Description: Records the state history of connections as a string of letters.
Data type: optional string
Name: orig_pkts
Description: Number of packets that the originator sent.
Data type: optional int64 - uint64 in original
Name: orig_ip_bytes
Description: Number of IP level bytes that the originator sent.
Data type: optional int64 - uint64 in original
Name: resp_pkts
Description: Number of packets that the responder sent.
Data type: optional int64 - uint64 in original
Name: resp_ip_bytes
Description: Number of IP level bytes that the responder sent.
Data type: optional int64 - uint64 in original
Name: label
Description: Specifies if data point is benign or some form of malicious. See the dataset creators paper for descriptions of attack types
Data type: string - enum(Malicious, Benign)
NOTE: ts, uid, id.orig_h, and id.resp_h have been removed as they are dataset specific. Models should not be trained with specific timestamps or IP addresses (id.orig_h) using this dataset, as that can lead to overfitting to dataset-specific times and addresses.
Further, local_orig and local_resp have been removed, as they are null in all rows and therefore useless for training.
</details>
## Citation
If you are using this dataset for your research, please reference it as “Sebastian Garcia, Agustin Parmisano, & Maria Jose Erquiaga. (2020). IoT-23: A labeled dataset with malicious and benign IoT network traffic (Version 1.0.0) [Data set]. Zenodo. http://doi.org/10.5281/zenodo.4743746” | [
-0.4861016571521759,
-0.7313993573188782,
-0.16484564542770386,
0.18938690423965454,
-0.1948198676109314,
-0.07079160213470459,
0.28354987502098083,
-0.5960379838943481,
0.4378036558628082,
0.6847277879714966,
-0.5465668439865112,
-0.40415388345718384,
-0.29402515292167664,
0.0480733513832... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
princeton-nlp/LLMBar | princeton-nlp | 2023-10-29T15:35:34Z | 48 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-29T15:35:34Z | 2023-10-29T15:35:18.000Z | 2023-10-29T15:35:18 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jin05102518/Raw_v2 | jin05102518 | 2023-10-30T14:41:38Z | 48 | 0 | null | [
"region:us"
] | 2023-10-30T14:41:38Z | 2023-10-30T13:35:23.000Z | 2023-10-30T13:35:23 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Santp98/query_generated-title-secop2 | Santp98 | 2023-11-04T01:45:40Z | 48 | 0 | null | [
"region:us"
] | 2023-11-04T01:45:40Z | 2023-10-31T03:01:42.000Z | 2023-10-31T03:01:42 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: process_id
dtype: string
- name: description
dtype: string
- name: generated_queries
dtype: string
- name: qty_of_bidders
dtype: int64
- name: bids_qty_by_bidders
dtype: int64
- name: publication_date
dtype: timestamp[ns]
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 37310887
num_examples: 127284
- name: test
num_bytes: 34600458
num_examples: 110263
download_size: 18543871
dataset_size: 71911345
---
# Dataset Card for "query_generated-title-secop2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2240155041217804,
-0.20511974394321442,
0.29332834482192993,
0.10030195862054825,
-0.36762964725494385,
-0.06909757107496262,
0.20619332790374756,
-0.16671113669872284,
0.7163079977035522,
0.7444366216659546,
-0.8679386377334595,
-0.6961352229118347,
-0.4185943305492401,
-0.041768014430... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
peuvini/llama2Pesquisa | peuvini | 2023-10-31T17:25:00Z | 48 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-31T17:25:00Z | 2023-10-31T16:57:43.000Z | 2023-10-31T16:57:43 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gear42/Nuscenes-QA-merge-front-image | gear42 | 2023-11-09T03:55:39Z | 48 | 1 | null | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | 2023-11-09T03:55:39Z | 2023-11-01T06:50:44.000Z | 2023-11-01T06:50:44 | ---
task_categories:
- conversational
language:
- en
size_categories:
- 10K<n<100K
---
USAGE in Python

The intended workflow has two steps: load the train and valid splits of the dataset, then prepend a local base folder to each image path.
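A minimal sketch of both steps, assuming the standard `datasets` API, a `valid` split, and a hypothetical `image_path` column (the card does not name the split or the path column explicitly):
```python
from datasets import load_dataset

# Step 1: load the train and valid splits.
ds = load_dataset("gear42/Nuscenes-QA-merge-front-image")
train, valid = ds["train"], ds["valid"]

# Step 2: prepend a local base folder to each image path.
base_folder = "/path/to/nuscenes/front-images"  # assumption: local image root

def add_base_folder(example):
    example["image_path"] = f"{base_folder}/{example['image_path']}"
    return example

train = train.map(add_base_folder)
valid = valid.map(add_base_folder)
```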
| [
-0.49260807037353516,
-0.061830371618270874,
-0.4152662754058838,
0.6002218127250671,
-0.38535836338996887,
-0.280303955078125,
0.14693690836429596,
0.3634023368358612,
-0.42206355929374695,
0.4009559154510498,
-0.2669423222541809,
-0.2570243775844574,
-0.34276676177978516,
0.3060215115547... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HiTZ/Multilingual-Opinion-Target-Extraction | HiTZ | 2023-11-22T13:32:07Z | 48 | 0 | aspect-based-sentiment-analysis | [
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:en",
"language:fr",
"language:es",
"language:ru",
"language:tr",
"license:apache-2.0",
"opinion",
"target",
"absa",
"aspect",
"sentiment analysis",
"arxiv:2210.12623",
"arxiv:2212.10548",
"region:us"
] | 2023-11-22T13:32:07Z | 2023-11-02T17:41:40.000Z | 2023-11-02T17:41:40 | ---
arxiv: 2210.12623
paperswithcode_id: aspect-based-sentiment-analysis
license: apache-2.0
configs:
- config_name: en
data_files:
- split: train
path: en.ote.train.json
- split: test
path: en.ote.test.json
- config_name: es
data_files:
- split: train
path: es.ote.train.json
- split: test
path: es.ote.test.json
- config_name: fr
data_files:
- split: train
path: fr.ote.train.json
- split: test
path: fr.ote.test.json
- config_name: ru
data_files:
- split: train
path: ru.ote.train.json
- split: test
path: ru.ote.test.json
- config_name: tr
data_files:
- split: train
path: tr.ote.train.json
task_categories:
- token-classification
language:
- en
- fr
- es
- ru
- tr
tags:
- opinion
- target
- absa
- aspect
- sentiment analysis
pretty_name: Multilingual Opinion Target Extraction
size_categories:
- 1K<n<10K
---
This repository contains the English '[SemEval-2014 Task 4: Aspect Based Sentiment Analysis](https://aclanthology.org/S14-2004/)' dataset, translated with DeepL into Spanish, French, Russian, and Turkish. The **labels have been manually projected**. For more details, read this paper: [Model and Data Transfer for Cross-Lingual Sequence Labelling in Zero-Resource Settings](https://arxiv.org/abs/2210.12623).
**Intended Usage**: Since the datasets are parallel across languages, they are ideal for evaluating annotation projection algorithms, such as [T-Projection](https://arxiv.org/abs/2212.10548).
# Label Dictionary
```python
{
"O": 0,
"B-TARGET": 1,
"I-TARGET": 2
}
```
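A minimal loading sketch, assuming the standard `datasets` API and the per-language configs declared above (note that the Turkish config ships only a train split):
```python
from datasets import load_dataset

# Each language is a separate config: en, es, fr, ru, tr.
ds = load_dataset("HiTZ/Multilingual-Opinion-Target-Extraction", "es")
print(ds)  # inspect splits and column names

# Map between the string tags and the integer ids above.
label2id = {"O": 0, "B-TARGET": 1, "I-TARGET": 2}
id2label = {i: tag for tag, i in label2id.items()}
```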
# Citation
If you use this data, please cite the following papers:
```bibtex
@inproceedings{garcia-ferrero-etal-2022-model,
title = "Model and Data Transfer for Cross-Lingual Sequence Labelling in Zero-Resource Settings",
author = "Garc{\'\i}a-Ferrero, Iker and
Agerri, Rodrigo and
Rigau, German",
editor = "Goldberg, Yoav and
Kozareva, Zornitsa and
Zhang, Yue",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.478",
doi = "10.18653/v1/2022.findings-emnlp.478",
pages = "6403--6416",
abstract = "Zero-resource cross-lingual transfer approaches aim to apply supervised modelsfrom a source language to unlabelled target languages. In this paper we performan in-depth study of the two main techniques employed so far for cross-lingualzero-resource sequence labelling, based either on data or model transfer. Although previous research has proposed translation and annotation projection(data-based cross-lingual transfer) as an effective technique for cross-lingualsequence labelling, in this paper we experimentally demonstrate that highcapacity multilingual language models applied in a zero-shot (model-basedcross-lingual transfer) setting consistently outperform data-basedcross-lingual transfer approaches. A detailed analysis of our results suggeststhat this might be due to important differences in language use. Morespecifically, machine translation often generates a textual signal which isdifferent to what the models are exposed to when using gold standard data,which affects both the fine-tuning and evaluation processes. Our results alsoindicate that data-based cross-lingual transfer approaches remain a competitiveoption when high-capacity multilingual language models are not available.",
}
@inproceedings{pontiki-etal-2014-semeval,
title = "{S}em{E}val-2014 Task 4: Aspect Based Sentiment Analysis",
author = "Pontiki, Maria and
Galanis, Dimitris and
Pavlopoulos, John and
Papageorgiou, Harris and
Androutsopoulos, Ion and
Manandhar, Suresh",
editor = "Nakov, Preslav and
Zesch, Torsten",
booktitle = "Proceedings of the 8th International Workshop on Semantic Evaluation ({S}em{E}val 2014)",
month = aug,
year = "2014",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S14-2004",
doi = "10.3115/v1/S14-2004",
pages = "27--35",
}
``` | [
-0.47163915634155273,
-0.5693202614784241,
0.3320516347885132,
0.4131990969181061,
-0.15937264263629913,
-0.12234234809875488,
-0.27891433238983154,
-0.6482828259468079,
0.2645764946937561,
0.6878066658973694,
-0.3621543347835541,
-0.7348032593727112,
-0.522977888584137,
0.3596589863300323... | null | null | null | null | null | null | null | null | null | null | null | null | null |