id
stringlengths
2
115
lastModified
stringlengths
24
24
tags
list
author
stringlengths
2
42
description
stringlengths
0
68.7k
citation
stringlengths
0
10.7k
cardData
null
likes
int64
0
3.55k
downloads
int64
0
10.1M
card
stringlengths
0
1.01M
Francesco/thermal-dogs-and-people-x6ejw
2023-03-30T09:19:15.000Z
[ "task_categories:object-detection", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc", "rf100", "region:us" ]
Francesco
null
null
null
0
187
--- dataset_info: features: - name: image_id dtype: int64 - name: image dtype: image - name: width dtype: int32 - name: height dtype: int32 - name: objects sequence: - name: id dtype: int64 - name: area dtype: int64 - name: bbox sequence: float32 length: 4 - name: category dtype: class_label: names: '0': thermal-dogs-n-people '1': dog '2': person annotations_creators: - crowdsourced language_creators: - found language: - en license: - cc multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - object-detection task_ids: [] pretty_name: thermal-dogs-and-people-x6ejw tags: - rf100 --- # Dataset Card for thermal-dogs-and-people-x6ejw **The original COCO dataset is stored at `dataset.tar.gz`** ## Dataset Description - **Homepage:** https://universe.roboflow.com/object-detection/thermal-dogs-and-people-x6ejw - **Point of Contact:** francesco.zuppichini@gmail.com ### Dataset Summary thermal-dogs-and-people-x6ejw ### Supported Tasks and Leaderboards - `object-detection`: The dataset can be used to train a model for Object Detection. ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image and its object annotations. ``` { 'image_id': 15, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>, 'width': 964043, 'height': 640, 'objects': { 'id': [114, 115, 116, 117], 'area': [3796, 1596, 152768, 81002], 'bbox': [ [302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0], [160.0, 31.0, 248.0, 616.0], [741.0, 68.0, 202.0, 401.0] ], 'category': [4, 4, 0, 0] } } ``` ### Data Fields - `image_id`: the image id - `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `width`: the image width - `height`: the image height - `objects`: a dictionary containing bounding box metadata for the objects present in the image - `id`: the annotation id - `area`: the area of the bounding box - `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) - `category`: the object's category. #### Who are the annotators? Annotators are Roboflow users. ## Additional Information ### Licensing Information See the original homepage https://universe.roboflow.com/object-detection/thermal-dogs-and-people-x6ejw ### Citation Information ``` @misc{ thermal-dogs-and-people-x6ejw, title = { thermal dogs and people x6ejw Dataset }, type = { Open Source Dataset }, author = { Roboflow 100 }, howpublished = { \url{ https://universe.roboflow.com/object-detection/thermal-dogs-and-people-x6ejw } }, url = { https://universe.roboflow.com/object-detection/thermal-dogs-and-people-x6ejw }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2023-03-29 }, } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
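The row-before-column access pattern recommended in the card can be illustrated with a small sketch. `ToyDataset` below is a hypothetical stand-in for a dataset with a lazily decoded image column (it is not the real `datasets` API): row access decodes a single image, while column access decodes every image in the dataset.

```python
class ToyDataset:
    """Hypothetical stand-in for a dataset with a lazily decoded image column."""

    def __init__(self, paths):
        self._paths = paths
        self.decodes = 0  # counts simulated expensive decodes

    def _decode(self, path):
        self.decodes += 1  # stands in for an expensive JPEG decode
        return f"<decoded {path}>"

    def __getitem__(self, key):
        if isinstance(key, int):
            # row access: decode only this row's image
            return {"image": self._decode(self._paths[key])}
        if key == "image":
            # column access: decode *every* image in the dataset
            return [self._decode(p) for p in self._paths]
        raise KeyError(key)


ds = ToyDataset([f"img_{i}.jpg" for i in range(1000)])

ds[0]["image"]     # row first: one decode
print(ds.decodes)  # 1

ds["image"][0]     # column first: decodes all 1000 images
print(ds.decodes)  # 1001
```

This is why `dataset[0]["image"]` stays cheap while `dataset["image"][0]` pays for the whole column before discarding all but one element.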
magicgh/alpaca-cleaned
2023-04-10T07:48:32.000Z
[ "license:cc-by-4.0", "region:us" ]
magicgh
null
null
null
1
187
--- license: cc-by-4.0 ---
tasksource/icl-symbol-tuning-instruct
2023-07-26T07:20:41.000Z
[ "task_categories:text2text-generation", "task_categories:text-classification", "task_categories:text-generation", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "in-context-learning", "symbol-tuning", "icl", "meta-icl", "meta-learning", "flan", "long-input", "instruction-tuning", "instruct", "metaicl", "arxiv:2110.15943", "arxiv:2305.08298", "arxiv:2301.05948", "region:us" ]
tasksource
null
null
null
10
187
--- license: apache-2.0 task_categories: - text2text-generation - text-classification - text-generation language: - en tags: - in-context-learning - symbol-tuning - icl - meta-icl - meta-learning - flan - long-input - instruction-tuning - instruct - metaicl dataset_info: features: - name: task dtype: string - name: inputs dtype: string - name: targets dtype: string - name: symbols sequence: string splits: - name: validation num_bytes: 42218685.0 num_examples: 14970 - name: test num_bytes: 43453364.0 num_examples: 16204 - name: train num_bytes: 1303015298.0 num_examples: 452367 download_size: 727062369 dataset_size: 1388687347.0 size_categories: - 100K<n<1M --- # Description Few-shot prompting demonstrates that language models can learn in context even though they were not trained to do so. However, explicitly learning to learn in context ([meta-icl](https://arxiv.org/abs/2110.15943)) leads to better results. With symbol tuning, labels are replaced with arbitrary symbols (e.g. foo/bar), which makes in-context learning a necessary condition for learning the instructions. We implement *symbol tuning*, as presented in the [Symbol tuning improves in-context learning](https://arxiv.org/pdf/2305.08298.pdf) paper, with tasksource classification datasets. An input is a shuffled sequence of 4 positive and 4 negative examples showing a particular label (replaced with a symbol - a random word), followed by an example to label. This is the largest symbol-tuning dataset to date, with 279 datasets. Symbol tuning improves in-context learning, which tends to be degraded by instruction tuning. # Usage We limit input size to 50_000 characters, which is more than enough to challenge long-range modeling. Be careful to remove examples that are too long, or to truncate them from the left; otherwise some examples might be unsolvable, as the questions are at the end of the examples. 
```python
from datasets import load_dataset

dataset = load_dataset('tasksource/icl-symbol-tuning-instruct')

# assuming ~4 characters per token, keep inputs under ~1000 tokens
dataset = dataset.filter(lambda x: len(x['inputs']) < 1000 * 4)
```

## References Code: https://github.com/sileod/tasksource ``` @article{sileo2023tasksource, title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation}, author={Sileo, Damien}, url={https://arxiv.org/abs/2301.05948}, journal={arXiv preprint arXiv:2301.05948}, year={2023} } @article{wei2023symbol, title={Symbol tuning improves in-context learning in language models}, author={Wei, Jerry and Hou, Le and Lampinen, Andrew and Chen, Xiangning and Huang, Da and Tay, Yi and Chen, Xinyun and Lu, Yifeng and Zhou, Denny and Ma, Tengyu and others}, journal={arXiv preprint arXiv:2305.08298}, year={2023} } ```
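Truncating from the left, as the usage notes recommend, is a plain string slice. `truncate_left` below is a hypothetical helper (not part of tasksource), using the same rough 4-characters-per-token heuristic:

```python
MAX_CHARS = 1000 * 4  # ~1000 tokens at a rough 4-characters-per-token heuristic


def truncate_left(text: str, max_chars: int = MAX_CHARS) -> str:
    """Keep the end of the input, where the question to be answered lives."""
    return text[-max_chars:]


example = "x" * 5000 + " QUESTION: which symbol applies?"
out = truncate_left(example)
print(len(out))                                         # 4000
print(out.endswith("QUESTION: which symbol applies?"))  # True
```

Filtering drops overlong examples entirely; mapping a left truncation over `inputs` keeps them while preserving the question at the end.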
yzhuang/autotree_automl_10000_credit_sgosdt_l256_dim10_d3_sd0
2023-09-07T02:24:10.000Z
[ "region:us" ]
yzhuang
null
null
null
0
187
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 126879367 dataset_size: 472880000 --- # Dataset Card for "autotree_automl_10000_credit_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_pmlb_10000_phoneme_sgosdt_l256_dim10_d3_sd0
2023-09-07T04:06:07.000Z
[ "region:us" ]
yzhuang
null
null
null
0
187
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 185240000 num_examples: 10000 - name: validation num_bytes: 185240000 num_examples: 10000 download_size: 68514231 dataset_size: 370480000 --- # Dataset Card for "autotree_pmlb_10000_phoneme_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_automl_10000_MagicTelescope_sgosdt_l256_dim10_d3_sd0
2023-09-07T05:48:36.000Z
[ "region:us" ]
yzhuang
null
null
null
0
187
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 186721409 dataset_size: 472880000 --- # Dataset Card for "autotree_automl_10000_MagicTelescope_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_automl_10000_MiniBooNE_sgosdt_l256_dim10_d3_sd0
2023-09-07T06:03:38.000Z
[ "region:us" ]
yzhuang
null
null
null
0
187
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 293033260 dataset_size: 472880000 --- # Dataset Card for "autotree_automl_10000_MiniBooNE_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_automl_10000_jannis_sgosdt_l256_dim10_d3_sd0
2023-09-07T06:07:04.000Z
[ "region:us" ]
yzhuang
null
null
null
0
187
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 292435235 dataset_size: 472880000 --- # Dataset Card for "autotree_automl_10000_jannis_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_pmlb_10000_twonorm_sgosdt_l256_dim10_d3_sd0
2023-09-07T06:47:08.000Z
[ "region:us" ]
yzhuang
null
null
null
0
187
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 144253019 dataset_size: 472880000 --- # Dataset Card for "autotree_pmlb_10000_twonorm_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_automl_10000_heloc_sgosdt_l256_dim10_d3_sd0
2023-09-07T07:33:56.000Z
[ "region:us" ]
yzhuang
null
null
null
0
187
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 80234603 dataset_size: 472880000 --- # Dataset Card for "autotree_automl_10000_heloc_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pavlichenko/WizardLM_evol_instruct_70k_train_val_split
2023-09-16T12:26:29.000Z
[ "task_categories:conversational", "size_categories:10K<n<100K", "region:us" ]
pavlichenko
WizardLM dataset split into train and validation sets.
null
null
0
187
--- task_categories: - conversational size_categories: - 10K<n<100K ---
maritaca-ai/imdb_pt
2023-04-01T16:15:34.000Z
[ "region:us" ]
maritaca-ai
Large Movie Review Dataset. This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well.
@InProceedings{maas-EtAl:2011:ACL-HLT2011, author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher}, title = {Learning Word Vectors for Sentiment Analysis}, booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies}, month = {June}, year = {2011}, address = {Portland, Oregon, USA}, publisher = {Association for Computational Linguistics}, pages = {142--150}, url = {http://www.aclweb.org/anthology/P11-1015} }
null
2
186
Entry not found
nanyy1025/covid_fake_news
2023-02-24T01:36:24.000Z
[ "task_categories:text-classification", "task_categories:zero-shot-classification", "language:en", "arxiv:2011.03327", "region:us" ]
nanyy1025
null
null
null
2
186
--- task_categories: - text-classification - zero-shot-classification language: - en --- Constraint@AAAI2021 - COVID19 Fake News Detection in English ``` @misc{patwa2020fighting, title={Fighting an Infodemic: COVID-19 Fake News Dataset}, author={Parth Patwa and Shivam Sharma and Srinivas PYKL and Vineeth Guptha and Gitanjali Kumari and Md Shad Akhtar and Asif Ekbal and Amitava Das and Tanmoy Chakraborty}, year={2020}, eprint={2011.03327}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Chinese-Vicuna/guanaco_belle_merge_v1.0
2023-03-30T07:49:30.000Z
[ "language:zh", "language:en", "language:ja", "license:gpl-3.0", "region:us" ]
Chinese-Vicuna
null
null
null
79
186
--- license: gpl-3.0 language: - zh - en - ja --- Thanks for [Guanaco Dataset](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset) and [Belle Dataset](https://huggingface.co/datasets/BelleGroup/generated_train_0.5M_CN) This dataset was created by merging the above two datasets in a certain format so that they can be used for training our code [Chinese-Vicuna](https://github.com/Facico/Chinese-Vicuna)
pbaoo2705/processed_dataset_v2
2023-09-06T05:27:28.000Z
[ "region:us" ]
pbaoo2705
null
null
null
0
186
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: pubid dtype: int32 - name: question dtype: string - name: context dtype: string - name: long_answer dtype: string - name: final_decision dtype: string - name: text dtype: string splits: - name: train num_bytes: 9013737 num_examples: 5000 - name: test num_bytes: 1797886 num_examples: 1000 download_size: 6294228 dataset_size: 10811623 --- # Dataset Card for "processed_dataset_v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_automl_10000_eye_movements_sgosdt_l256_dim10_d3_sd0
2023-09-07T03:32:07.000Z
[ "region:us" ]
yzhuang
null
null
null
0
186
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 155715478 dataset_size: 472880000 --- # Dataset Card for "autotree_automl_10000_eye_movements_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_automl_10000_Higgs_sgosdt_l256_dim10_d3_sd0
2023-09-07T06:25:57.000Z
[ "region:us" ]
yzhuang
null
null
null
0
186
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 288073393 dataset_size: 472880000 --- # Dataset Card for "autotree_automl_10000_Higgs_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
HuggingFaceH4/surge_instruct_llama2
2023-09-17T02:54:46.000Z
[ "region:us" ]
HuggingFaceH4
null
null
null
0
186
--- dataset_info: features: - name: prompt dtype: string - name: prompt_id dtype: string - name: messages list: - name: content dtype: string - name: role dtype: string - name: meta struct: - name: category dtype: string - name: source dtype: string - name: text dtype: string splits: - name: train num_bytes: 28526320 num_examples: 9500 - name: test num_bytes: 1530441 num_examples: 500 download_size: 18846908 dataset_size: 30056761 --- # Dataset Card for "surge_instruct_llama2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sem_eval_2014_task_1
2023-01-25T14:43:53.000Z
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-ImageFlickr and SemEval-2012 STS MSR-Video Descriptions", "language:en", "license:cc-by-4.0", "region:us" ]
null
The SemEval-2014 Task 1 focuses on Evaluation of Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Entailment. The task was designed to predict the degree of relatedness between two sentences and to detect the entailment relation holding between them.
@inproceedings{inproceedings, author = {Marelli, Marco and Bentivogli, Luisa and Baroni, Marco and Bernardi, Raffaella and Menini, Stefano and Zamparelli, Roberto}, year = {2014}, month = {08}, pages = {}, title = {SemEval-2014 Task 1: Evaluation of Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Textual Entailment}, doi = {10.3115/v1/S14-2001} }
null
1
185
--- annotations_creators: - crowdsourced language_creators: - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - extended|other-ImageFlickr and SemEval-2012 STS MSR-Video Descriptions task_categories: - text-classification task_ids: - text-scoring - natural-language-inference - semantic-similarity-scoring pretty_name: SemEval 2014 - Task 1 dataset_info: features: - name: sentence_pair_id dtype: int64 - name: premise dtype: string - name: hypothesis dtype: string - name: relatedness_score dtype: float32 - name: entailment_judgment dtype: class_label: names: '0': NEUTRAL '1': ENTAILMENT '2': CONTRADICTION splits: - name: train num_bytes: 540296 num_examples: 4500 - name: test num_bytes: 592320 num_examples: 4927 - name: validation num_bytes: 60981 num_examples: 500 download_size: 197230 dataset_size: 1193597 --- # Dataset Card for SemEval 2014 - Task 1 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** 
[SemEval-2014 Task 1](https://alt.qcri.org/semeval2014/task1/) - **Repository:** - **Paper:** [Aclweb](https://www.aclweb.org/anthology/S14-2001/) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@ashmeet13](https://github.com/ashmeet13) for adding this dataset.
mozilla-foundation/common_voice_2_0
2023-07-29T15:59:58.000Z
[ "task_categories:automatic-speech-recognition", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "source_datasets:extended|common_voice", "license:cc0-1.0", "arxiv:1912.06670", "region:us" ]
mozilla-foundation
null
@inproceedings{commonvoice:2020, author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.}, title = {Common Voice: A Massively-Multilingual Speech Corpus}, booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)}, pages = {4211--4215}, year = 2020 }
null
0
185
--- annotations_creators: - crowdsourced language_creators: - crowdsourced license: - cc0-1.0 multilinguality: - multilingual size_categories: br: - 10K<n<100K ca: - 10K<n<100K cnh: - 1K<n<10K cv: - 1K<n<10K cy: - 10K<n<100K de: - 100K<n<1M dv: - 1K<n<10K en: - 100K<n<1M eo: - 10K<n<100K es: - 10K<n<100K et: - 1K<n<10K eu: - 10K<n<100K fr: - 100K<n<1M ga-IE: - 1K<n<10K it: - 10K<n<100K kab: - 100K<n<1M ky: - 10K<n<100K mn: - 1K<n<10K nl: - 10K<n<100K ru: - 10K<n<100K rw: - 1K<n<10K sah: - 1K<n<10K sl: - 1K<n<10K sv-SE: - 1K<n<10K tr: - 1K<n<10K tt: - 10K<n<100K zh-CN: - 1K<n<10K zh-TW: - 10K<n<100K source_datasets: - extended|common_voice paperswithcode_id: common-voice pretty_name: Common Voice Corpus 2 language_bcp47: - br - ca - cnh - cv - cy - de - dv - en - eo - es - et - eu - fr - ga-IE - it - kab - ky - mn - nl - ru - rw - sah - sl - sv-SE - tr - tt - zh-CN - zh-TW extra_gated_prompt: By clicking on “Access repository” below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset. 
task_categories: - automatic-speech-recognition --- # Dataset Card for Common Voice Corpus 2 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://commonvoice.mozilla.org/en/datasets - **Repository:** https://github.com/common-voice/common-voice - **Paper:** https://arxiv.org/abs/1912.06670 - **Leaderboard:** https://paperswithcode.com/dataset/common-voice - **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co) ### Dataset Summary The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 2366 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help improve the accuracy of speech recognition engines. The dataset currently consists of 1872 validated hours in 28 languages, but more voices and languages are always added. Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing. 
### Supported Tasks and Leaderboards The results for models trained on the Common Voice datasets are available via the [🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench) ### Languages ``` Basque, Breton, Catalan, Chinese (China), Chinese (Taiwan), Chuvash, Dhivehi, Dutch, English, Esperanto, Estonian, French, German, Hakha Chin, Irish, Italian, Kabyle, Kinyarwanda, Kyrgyz, Mongolian, Russian, Sakha, Slovenian, Spanish, Swedish, Tatar, Turkish, Welsh ``` ## Dataset Structure ### Data Instances A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`. ```python { 'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5', 'path': 'et/clips/common_voice_et_18318995.mp3', 'audio': { 'path': 'et/clips/common_voice_et_18318995.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 48000 }, 'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.', 'up_votes': 2, 'down_votes': 0, 'age': 'twenties', 'gender': 'male', 'accent': '', 'locale': 'et', 'segment': '' } ``` ### Data Fields `client_id` (`string`): An id for which client (voice) made the recording `path` (`string`): The path to the audio file `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. 
Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. `sentence` (`string`): The sentence the user was prompted to speak `up_votes` (`int64`): How many upvotes the audio file has received from reviewers `down_votes` (`int64`): How many downvotes the audio file has received from reviewers `age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`) `gender` (`string`): The gender of the speaker `accent` (`string`): Accent of the speaker `locale` (`string`): The locale of the speaker `segment` (`string`): Usually an empty field ### Data Splits The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other. The validated data is data that has been validated by reviewers and received upvotes indicating that the data is of high quality. The invalidated data is data that has been invalidated by reviewers and received downvotes indicating that the data is of low quality. The reported data is data that has been reported, for different reasons. The other data is data that has not yet been reviewed. The dev, test and train sets all contain data that has been reviewed and deemed of high quality. ## Data Preprocessing Recommended by Hugging Face The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice. Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.”_. These trailing quotation marks do not change the actual meaning of the sentence, and it is nearly impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_. In addition, the majority of training sentences end in punctuation ( . or ? or ! 
), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.

```python
from datasets import load_dataset

ds = load_dataset("mozilla-foundation/common_voice_2_0", "en", use_auth_token=True)

def prepare_dataset(batch):
    """Function to preprocess the dataset with the .map method"""
    transcription = batch["sentence"]

    if transcription.startswith('"') and transcription.endswith('"'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]

    if transcription[-1] not in [".", "?", "!"]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "."

    batch["sentence"] = transcription

    return batch

ds = ds.map(prepare_dataset, desc="preprocess dataset")
```

## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ## Considerations for Using the Data ### Social Impact of Dataset The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. 
### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/) ### Citation Information ``` @inproceedings{commonvoice:2020, author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.}, title = {Common Voice: A Massively-Multilingual Speech Corpus}, booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)}, pages = {4211--4215}, year = 2020 } ```
rubend18/ChatGPT-Jailbreak-Prompts
2023-08-24T18:24:29.000Z
[ "task_categories:question-answering", "task_categories:text-generation", "task_categories:fill-mask", "task_categories:zero-shot-classification", "task_categories:table-question-answering", "size_categories:n<1K", "language:en", "language:aa", "ChatGPT", "JailbreakPrompts", "LanguageModeling", "ArtificialIntelligence", "TextGeneration", "Dataset", "OpenAI", "Jailbreak", "Prompts", "region:us" ]
rubend18
null
null
null
26
185
--- task_categories: - question-answering - text-generation - fill-mask - zero-shot-classification - table-question-answering language: - en - aa tags: - ChatGPT - JailbreakPrompts - LanguageModeling - ArtificialIntelligence - TextGeneration - Dataset - OpenAI - Jailbreak - Prompts size_categories: - n<1K pretty_name: ChatGPT Jailbreak Prompts --- # Dataset Card for Dataset Name ## Name ChatGPT Jailbreak Prompts ## Dataset Description - **Author:** Rubén Darío Jaramillo - **Email:** rubend18@hotmail.com - **WhatsApp:** +593 93 979 6676 ### Dataset Summary ChatGPT Jailbreak Prompts is a complete collection of jailbreak-related prompts for ChatGPT. This dataset is intended to provide a valuable resource for understanding and generating text in the context of jailbreaking ChatGPT. ### Languages English
allocine
2023-01-25T14:26:09.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:fr", "license:mit", "region:us" ]
null
Allocine Dataset: A Large-Scale French Movie Reviews Dataset. This is a dataset for binary sentiment classification, made of user reviews scraped from Allocine.fr. It contains 100k positive and 100k negative reviews divided into 3 balanced splits: train (160k reviews), val (20k) and test (20k).
@misc{blard2019allocine, author = {Blard, Theophile}, title = {french-sentiment-analysis-with-bert}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished={\url{https://github.com/TheophileBlard/french-sentiment-analysis-with-bert}}, }
null
6
184
--- annotations_creators: - no-annotation language_creators: - found language: - fr license: - mit multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: allocine pretty_name: Allociné dataset_info: features: - name: review dtype: string - name: label dtype: class_label: names: '0': neg '1': pos config_name: allocine splits: - name: train num_bytes: 91330696 num_examples: 160000 - name: validation num_bytes: 11546250 num_examples: 20000 - name: test num_bytes: 11547697 num_examples: 20000 download_size: 66625305 dataset_size: 114424643 train-eval-index: - config: allocine task: text-classification task_id: multi_class_classification splits: train_split: train eval_split: test col_mapping: review: text label: target metrics: - type: accuracy name: Accuracy - type: f1 name: F1 macro args: average: macro - type: f1 name: F1 micro args: average: micro - type: f1 name: F1 weighted args: average: weighted - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted --- # Dataset Card for Allociné ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - 
[Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** [Allociné dataset repository](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert/tree/master/allocine_dataset) - **Paper:** - **Leaderboard:** - **Point of Contact:** [Théophile Blard](mailto:theophile.blard@gmail.com) ### Dataset Summary The Allociné dataset is a French-language dataset for sentiment analysis. The texts are movie reviews written between 2006 and 2020 by members of the [Allociné.fr](https://www.allocine.fr/) community for various films. It contains 100k positive and 100k negative reviews divided into train (160k), validation (20k), and test (20k). ### Supported Tasks and Leaderboards - `text-classification`, `sentiment-classification`: The dataset can be used to train a model for sentiment classification. The model performance is evaluated based on the accuracy of the predicted labels as compared to the given labels in the dataset. A BERT-based model, [tf-allociné](https://huggingface.co/tblard/tf-allocine), achieves 97.44% accuracy on the test set. ### Languages The text is in French, as spoken by users of the [Allociné.fr](https://www.allocine.fr/) website. The BCP-47 code for French is fr. ## Dataset Structure ### Data Instances Each data instance contains the following features: _review_ and _label_. In the Hugging Face distribution of the dataset, the _label_ has 2 possible values, _0_ and _1_, which correspond to _negative_ and _positive_ respectively. 
See the [Allociné corpus viewer](https://huggingface.co/datasets/viewer/?dataset=allocine) to explore more examples. An example from the Allociné train set looks like the following: ``` {'review': 'Premier film de la saga Kozure Okami, "Le Sabre de la vengeance" est un très bon film qui mêle drame et action, et qui, en 40 ans, n\'a pas pris une ride.', 'label': 1} ``` ### Data Fields - 'review': a string containing the review text - 'label': an integer, either _0_ or _1_, indicating a _negative_ or _positive_ review, respectively ### Data Splits The Allociné dataset has 3 splits: _train_, _validation_, and _test_. The splits contain disjoint sets of movies. The following table contains the number of reviews in each split and the percentage of positive and negative reviews. | Dataset Split | Number of Instances in Split | Percent Negative Reviews | Percent Positive Reviews | | ------------- | ---------------------------- | ------------------------ | ------------------------ | | Train | 160,000 | 49.6% | 50.4% | | Validation | 20,000 | 51.0% | 49.0% | | Test | 20,000 | 52.0% | 48.0% | ## Dataset Creation ### Curation Rationale The Allociné dataset was developed to support large-scale sentiment analysis in French. It was released alongside the [tf-allociné](https://huggingface.co/tblard/tf-allocine) model and used to compare the performance of several language models on this task. ### Source Data #### Initial Data Collection and Normalization The reviews and ratings were collected using a list of [film page urls](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert/blob/master/allocine_dataset/allocine_films_urls.txt) and the [allocine_scraper.py](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert/blob/master/allocine_dataset/allocine_scraper.py) tool. Up to 30 reviews were collected for each film. The reviews were originally labeled with a rating from 0.5 to 5.0 with a step of 0.5 between each rating.
Ratings less than or equal to 2 are labeled as negative and ratings greater than or equal to 4 are labeled as positive. Only reviews with less than 2000 characters are included in the dataset. #### Who are the source language producers? The dataset contains movie reviews produced by the online community of the [Allociné.fr](https://www.allocine.fr/) website. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Reviewer usernames or personal information were not collected with the reviews, but could potentially be recovered. The content of each review may include information and opinions about the film's actors, film crew, and plot. ## Considerations for Using the Data ### Social Impact of Dataset Sentiment classification is a complex task which requires sophisticated language understanding skills. Successful models can support decision-making based on the outcome of the sentiment analysis, though such models currently require a high degree of domain specificity. It should be noted that the community represented in the dataset may not represent any downstream application's potential users, and the observed behavior of a model trained on this dataset may vary based on the domain and use case. ### Discussion of Biases The Allociné website lists a number of topics which violate their [terms of service](https://www.allocine.fr/service/conditions.html#charte). Further analysis is needed to determine the extent to which moderators have successfully removed such content. ### Other Known Limitations The limitations of the Allociné dataset have not yet been investigated, however [Staliūnaitė and Bonfil (2017)](https://www.aclweb.org/anthology/W17-5410.pdf) detail linguistic phenomena that are generally present in sentiment analysis but difficult for models to accurately label, such as negation, adverbial modifiers, and reviewer pragmatics. 
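For reference, the rating-to-label rule described in the Source Data section above can be written out as a small helper. A minimal sketch: `rating_to_label` is an illustrative name, and treating the intermediate ratings (2.5–3.5), which the card does not assign to either class, as excluded is our reading of the card:

```python
def rating_to_label(rating: float):
    """Map an Allociné star rating (0.5-5.0 in steps of 0.5) to a
    binary sentiment label: 0 = negative, 1 = positive.
    Returns None for intermediate ratings (2.5-3.5), which the card
    does not assign to either class (assumed excluded)."""
    if rating <= 2.0:
        return 0  # negative
    if rating >= 4.0:
        return 1  # positive
    return None

assert rating_to_label(0.5) == 0
assert rating_to_label(2.0) == 0
assert rating_to_label(3.0) is None
assert rating_to_label(4.0) == 1
assert rating_to_label(5.0) == 1
```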
## Additional Information ### Dataset Curators The Allociné dataset was collected by Théophile Blard. ### Licensing Information The Allociné dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT). ### Citation Information > Théophile Blard, French sentiment analysis with BERT, (2020), GitHub repository, <https://github.com/TheophileBlard/french-sentiment-analysis-with-bert> ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@TheophileBlard](https://github.com/TheophileBlard), [@lewtun](https://github.com/lewtun) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset.
squad_it
2023-04-05T13:40:37.000Z
[ "task_categories:question-answering", "task_ids:open-domain-qa", "task_ids:extractive-qa", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:extended|squad", "language:it", "license:unknown", "region:us" ]
null
SQuAD-it is derived from the SQuAD dataset and is obtained through semi-automatic translation of the SQuAD dataset into Italian. It represents a large-scale dataset for open question answering on factoid questions in Italian. The dataset contains more than 60,000 question/answer pairs derived from the original English dataset. The dataset is split into training and test sets to support the replicability of the benchmarking of QA systems.
@InProceedings{10.1007/978-3-030-03840-3_29, author={Croce, Danilo and Zelenanska, Alexandra and Basili, Roberto}, editor={Ghidini, Chiara and Magnini, Bernardo and Passerini, Andrea and Traverso, Paolo}, title={Neural Learning for Question Answering in Italian}, booktitle={AI*IA 2018 -- Advances in Artificial Intelligence}, year={2018}, publisher={Springer International Publishing}, address={Cham}, pages={389--402}, isbn={978-3-030-03840-3} }
null
2
184
--- annotations_creators: - machine-generated language_creators: - machine-generated language: - it language_bcp47: - it-IT license: - unknown multilinguality: - monolingual size_categories: - unknown source_datasets: - extended|squad task_categories: - question-answering task_ids: - open-domain-qa - extractive-qa paperswithcode_id: squad-it pretty_name: SQuAD-it dataset_info: features: - name: id dtype: string - name: context dtype: string - name: question dtype: string - name: answers sequence: - name: text dtype: string - name: answer_start dtype: int32 splits: - name: train num_bytes: 50864824 num_examples: 54159 - name: test num_bytes: 7858336 num_examples: 7609 download_size: 8776531 dataset_size: 58723160 --- # Dataset Card for "squad_it" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/crux82/squad-it](https://github.com/crux82/squad-it) - **Repository:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 8.78 MB - **Size of the generated dataset:** 58.79 MB - **Total amount of disk used:** 67.57 MB ### Dataset Summary SQuAD-it is derived from the SQuAD dataset and it is obtained through semi-automatic translation of the SQuAD dataset into Italian. It represents a large-scale dataset for open question answering processes on factoid questions in Italian. The dataset contains more than 60,000 question/answer pairs derived from the original English dataset. The dataset is split into training and test sets to support the replicability of the benchmarking of QA systems: ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 8.78 MB - **Size of the generated dataset:** 58.79 MB - **Total amount of disk used:** 67.57 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "answers": "{\"answer_start\": [243, 243, 243, 243, 243], \"text\": [\"evitare di essere presi di mira dal boicottaggio\", \"evitare di essere pres...", "context": "\"La crisi ha avuto un forte impatto sulle relazioni internazionali e ha creato una frattura all' interno della NATO. 
Alcune nazi...", "id": "5725b5a689a1e219009abd28", "question": "Perchè le nazioni europee e il Giappone si sono separati dagli Stati Uniti durante la crisi?" } ``` ### Data Fields The data fields are the same among all splits. #### default - `id`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. ### Data Splits | name | train | test | | ------- | ----: | ---: | | default | 54159 | 7609 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{10.1007/978-3-030-03840-3_29, author="Croce, Danilo and Zelenanska, Alexandra and Basili, Roberto", editor="Ghidini, Chiara and Magnini, Bernardo and Passerini, Andrea and Traverso, Paolo", title="Neural Learning for Question Answering in Italian", booktitle="AI*IA 2018 -- Advances in Artificial Intelligence", year="2018", publisher="Springer International Publishing", address="Cham", pages="389--402", isbn="978-3-030-03840-3" } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
keshan/wit-dataset
2021-08-07T18:15:42.000Z
[ "region:us" ]
keshan
Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset. WIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. Its size enables WIT to be used as a pretraining dataset for multimodal machine learning models.
@article{srinivasan2021wit, title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning}, author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc}, journal={arXiv preprint arXiv:2103.01913}, year={2021} }
null
1
184
https://github.com/google-research-datasets/wit Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset. WIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. ``` @article{srinivasan2021wit, title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning}, author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc}, journal={arXiv preprint arXiv:2103.01913}, year={2021} } ```
zzliang/GRIT
2023-07-04T06:40:28.000Z
[ "task_categories:text-to-image", "task_categories:image-to-text", "task_categories:object-detection", "task_categories:zero-shot-classification", "task_ids:image-captioning", "task_ids:visual-question-answering", "multilinguality:monolingual", "size_categories:100M<n<1B", "source_datasets:COYO-700M", "language:en", "license:ms-pl", "image-text-bounding-box pairs", "image-text pairs", "arxiv:2306.14824", "region:us" ]
zzliang
null
null
null
37
184
--- license: ms-pl language: - en multilinguality: - monolingual pretty_name: GRIT size_categories: - 100M<n<1B source_datasets: - COYO-700M tags: - image-text-bounding-box pairs - image-text pairs task_categories: - text-to-image - image-to-text - object-detection - zero-shot-classification task_ids: - image-captioning - visual-question-answering --- # GRIT: Large-Scale Training Corpus of Grounded Image-Text Pairs ### Dataset Description - **Repository:** [Microsoft unilm](https://github.com/microsoft/unilm/tree/master/kosmos-2) - **Paper:** [Kosmos-2](https://arxiv.org/abs/2306.14824) ### Dataset Summary We introduce GRIT, a large-scale dataset of Grounded Image-Text pairs, which is created based on image-text pairs from [COYO-700M](https://github.com/kakaobrain/coyo-dataset) and LAION-2B. We construct a pipeline to extract and link text spans (i.e., noun phrases, and referring expressions) in the caption to their corresponding image regions. More details can be found in the [paper](https://arxiv.org/abs/2306.14824). ### Supported Tasks During the construction, we excluded the image-caption pairs if no bounding boxes are retained. This procedure resulted in a high-quality image-caption subset of COYO-700M, which we will validate in the future. Furthermore, this dataset contains text-span-bounding-box pairs. Thus, it can be used in many location-aware mono/multimodal tasks, such as phrase grounding, referring expression comprehension, referring expression generation, and open-world object detection. 
### Data Instance One instance is ```python { 'key': '000373938', 'clip_similarity_vitb32': 0.353271484375, 'clip_similarity_vitl14': 0.2958984375, 'id': 1795296605919, 'url': "https://www.thestrapsaver.com/wp-content/uploads/customerservice-1.jpg", 'caption': 'a wire hanger with a paper cover that reads we heart our customers', 'width': 1024, 'height': 693, 'noun_chunks': [[19, 32, 0.019644069503434333, 0.31054004033406574, 0.9622142865754519, 0.9603442351023356, 0.79298526], [0, 13, 0.019422357885505368, 0.027634161214033764, 0.9593302408854166, 0.969467560450236, 0.67520964]], 'ref_exps': [[19, 66, 0.019644069503434333, 0.31054004033406574, 0.9622142865754519, 0.9603442351023356, 0.79298526], [0, 66, 0.019422357885505368, 0.027634161214033764, 0.9593302408854166, 0.969467560450236, 0.67520964]] } ``` - `key`: The generated file name when using img2dataset to download COYO-700M (omit it). - `clip_similarity_vitb32`: The cosine similarity between text and image(ViT-B/32) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP), provided by COYO-700M. - `clip_similarity_vitl14`: The cosine similarity between text and image(ViT-L/14) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP), provided by COYO-700M. - `id`: Unique 64-bit integer ID in COYO-700M. - `url`: The image URL. - `caption`: The corresponding caption. - `width`: The width of the image. - `height`: The height of the image. - `noun_chunks`: The noun chunks (extracted by [spaCy](https://spacy.io/)) that have associated bounding boxes (predicted by [GLIP](https://github.com/microsoft/GLIP)). The items in the children list respectively represent 'Start of the noun chunk in caption', 'End of the noun chunk in caption', 'normalized x_min', 'normalized y_min', 'normalized x_max', 'normalized y_max', 'confidence score'. - `ref_exps`: The corresponding referring expressions. If a noun chunk has no expansion, we just copy it. 
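For illustration, each seven-element `noun_chunks` (or `ref_exps`) entry can be decoded into a caption span and a pixel-space box as follows. A sketch based on the instance above; the helper and variable names are ours:

```python
# Example caption and image size taken from the data instance above.
caption = 'a wire hanger with a paper cover that reads we heart our customers'
width, height = 1024, 693

def decode_chunk(chunk):
    """Split one noun_chunks entry into (phrase, pixel box, confidence)."""
    start, end = int(chunk[0]), int(chunk[1])       # span in the caption
    x_min, y_min, x_max, y_max = chunk[2:6]         # normalized box
    box_px = (x_min * width, y_min * height, x_max * width, y_max * height)
    return caption[start:end], box_px, chunk[6]

phrase, box, score = decode_chunk(
    [19, 32, 0.019644069503434333, 0.31054004033406574,
     0.9622142865754519, 0.9603442351023356, 0.79298526])
phrase2, box2, _ = decode_chunk(
    [0, 13, 0.019422357885505368, 0.027634161214033764,
     0.9593302408854166, 0.969467560450236, 0.67520964])

assert phrase == 'a paper cover'
assert phrase2 == 'a wire hanger'
assert 0.0 <= box[0] < box[2] <= width and 0.0 <= box[1] < box[3] <= height
assert 0.0 < score <= 1.0
```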
### Download image We recommend using the [img2dataset](https://github.com/rom1504/img2dataset) tool to download the images. 1. Download the metadata. You can download it by cloning this repository: ```bash git lfs install git clone https://huggingface.co/datasets/zzliang/GRIT ``` 2. Install [img2dataset](https://github.com/rom1504/img2dataset). ```bash pip install img2dataset ``` 3. Download images You need to replace `/path/to/GRIT_dataset/grit-20m` with the local path to this repository. ```bash img2dataset --url_list /path/to/GRIT_dataset/grit-20m --input_format "parquet"\ --url_col "url" --caption_col "caption" --output_format webdataset \ --output_folder /tmp/grit --processes_count 4 --thread_count 64 --image_size 256 \ --resize_only_if_bigger=True --resize_mode="keep_ratio" --skip_reencode=True \ --save_additional_columns '["id","noun_chunks","ref_exps","clip_similarity_vitb32","clip_similarity_vitl14"]' \ --enable_wandb False ``` You can adjust some parameters according to your actual needs (e.g., `processes_count`, `thread_count`, `image_size`, `save_additional_columns`). More img2dataset hyper-parameters can be found [here](https://github.com/rom1504/img2dataset#api). ### Citation Information If you apply this dataset to any project and research, please cite our paper and COYO-700M: ``` @article{Kosmos2, title={Kosmos-2: Grounding Multimodal Large Language Models to the World}, author={Zhiliang Peng and Wenhui Wang and Li Dong and Yaru Hao and Shaohan Huang and Shuming Ma and Furu Wei}, journal={ArXiv}, year={2023}, volume={abs/2306.14824} } @misc{kakaobrain2022coyo-700m, title = {COYO-700M: Image-Text Pair Dataset}, author = {Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, Saehoon Kim}, year = {2022}, howpublished = {\url{https://github.com/kakaobrain/coyo-dataset}}, } ```
nampdn-ai/tiny-orca-textbooks
2023-09-28T02:15:06.000Z
[ "task_categories:text-generation", "size_categories:100K<n<1M", "language:en", "license:cc-by-nc-sa-4.0", "arxiv:2309.05463", "arxiv:2305.07759", "region:us" ]
nampdn-ai
null
null
null
5
184
--- task_categories: - text-generation language: - en pretty_name: Tiny Orca Textbooks size_categories: - 100K<n<1M license: cc-by-nc-sa-4.0 --- # Textbook-like Dataset: A Comprehensive Resource for Text-Based Skills Development in Small Language Models This dataset is a collection of **147k synthetic textbooks** designed to enhance the text-based skills of small language models. The curriculum is meticulously structured to progress from simple to complex tasks, ensuring a gradual and effective learning experience when pretraining or finetuning SLMs. The inspiration for this dataset comes from the technical report, [Textbooks Are All You Need II: phi-1.5 technical report](https://arxiv.org/abs/2309.05463). The source texts incorporated in this dataset are derived from the [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset, a well-known resource in the field. Emphasizing text-based skills, this dataset serves as a practical resource from which small language models can learn and practice reasoning, providing them with a diverse range of skills to learn and adapt from. The step-by-step progression mirrors the structure of a textbook, making it an ideal in-context learning sample. ### Disclaimer While every effort has been made to ensure the accuracy of the information contained within this dataset, please note that it is provided 'as is' and without any warranties. The use of the `textbook` field in this dataset is intended for research purposes only. You are advised to verify any information obtained from this dataset before acting upon it. ## Tiny Series Explore the possibilities and limitations of building Small Language Models with these tiny gems of data! - [TinyStories](https://arxiv.org/abs/2305.07759): The paper that sparked my interest in the journey of the tiny-* series. - [tiny-codes](https://huggingface.co/datasets/nampdn-ai/tiny-codes): Collection of 1.6M short and clear code snippets that can help LLM models learn how to reason.
- [tiny-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-textbooks): 420k "things of internet" synthetic textbooks. - [tiny-webtext](https://huggingface.co/datasets/nampdn-ai/tiny-webtext): A 6GB (4.5M records) variety of diverse webtext enriched with critical thinking methods to make an unbiased English dataset. - [tiny-lessons](https://huggingface.co/datasets/nampdn-ai/tiny-lessons): Subset of the *tiny-textbooks* dataset, with various lessons about "things of internet" augmented in a bite-sized textbook Markdown format. - [tiny-bridgedict](https://huggingface.co/datasets/nampdn-ai/tiny-bridgedict): A dataset that links and transfers knowledge between English, Vietnamese, and Chinese in tiny multilingual models. ### Other small HQ datasets with textbook-like quality - [devdocs.io](https://huggingface.co/datasets/nampdn-ai/devdocs.io): FreeCodeCamp has provided 189k comprehensive API documentation entries across a wide range of tech stacks and programming languages. - [sciphi-python-textbook](https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-python-textbook) - [textbook_quality_programming](https://huggingface.co/datasets/vikp/textbook_quality_programming) - [sciphi-textbooks-are-all-you-need](https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-textbooks-are-all-you-need)
yzhuang/autotree_pmlb_10000_magic_sgosdt_l256_dim10_d3_sd0
2023-09-07T05:44:01.000Z
[ "region:us" ]
yzhuang
null
null
null
0
184
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 188904676 dataset_size: 472880000 --- # Dataset Card for "autotree_pmlb_10000_magic_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/squad_v2_1000_0.50_id
2023-09-12T16:42:18.000Z
[ "region:us" ]
tyzhu
null
null
null
0
184
--- dataset_info: features: - name: inputs dtype: string - name: targets dtype: string - name: question dtype: string - name: context dtype: string - name: answers struct: - name: answer_start sequence: int64 - name: text sequence: string - name: id dtype: string splits: - name: train num_bytes: 97308726.73032843 num_examples: 55568 - name: validation num_bytes: 1917601 num_examples: 1000 download_size: 4274826 dataset_size: 99226327.73032843 --- # Dataset Card for "squad_v2_1000_0.50_id" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/e8491cc1
2023-10-03T01:18:56.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
null
0
184
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 168 num_examples: 10 download_size: 1314 dataset_size: 168 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "e8491cc1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jjonhwa/dolly-ko
2023-10-08T09:55:19.000Z
[ "region:us" ]
jjonhwa
null
null
null
0
184
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 14238792 num_examples: 15011 download_size: 8006189 dataset_size: 14238792 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "dolly-ko" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
definite_pronoun_resolution
2023-04-05T10:04:44.000Z
[ "task_categories:token-classification", "task_ids:word-sense-disambiguation", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
null
Composed by 30 students from one of the author's undergraduate classes. These sentence pairs cover topics ranging from real events (e.g., Iran's plan to attack the Saudi ambassador to the U.S.) to events/characters in movies (e.g., Batman) and purely imaginary situations, largely reflecting pop culture as perceived by American kids born in the early 90s. Each annotated example spans four lines: the first line contains the sentence, the second line contains the target pronoun, the third line contains the two candidate antecedents, and the fourth line contains the correct antecedent. If the target pronoun appears more than once in the sentence, its first occurrence is the one to be resolved.
@inproceedings{rahman2012resolving, title={Resolving complex cases of definite pronouns: the winograd schema challenge}, author={Rahman, Altaf and Ng, Vincent}, booktitle={Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning}, pages={777--789}, year={2012}, organization={Association for Computational Linguistics} }
null
3
183
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - en license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - word-sense-disambiguation paperswithcode_id: definite-pronoun-resolution-dataset pretty_name: Definite Pronoun Resolution Dataset dataset_info: features: - name: sentence dtype: string - name: pronoun dtype: string - name: candidates sequence: string length: 2 - name: label dtype: class_label: names: '0': '0' '1': '1' config_name: plain_text splits: - name: test num_bytes: 71691 num_examples: 564 - name: train num_bytes: 171511 num_examples: 1322 download_size: 227452 dataset_size: 243202 --- # Dataset Card for "definite_pronoun_resolution" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.hlt.utdallas.edu/~vince/data/emnlp12/](https://www.hlt.utdallas.edu/~vince/data/emnlp12/) - **Repository:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 0.23 MB - **Size of the generated dataset:** 0.24 MB - **Total amount of disk used:** 0.47 MB ### Dataset Summary Composed by 30 students from one of the author's undergraduate classes. These sentence pairs cover topics ranging from real events (e.g., Iran's plan to attack the Saudi ambassador to the U.S.) to events/characters in movies (e.g., Batman) and purely imaginary situations, largely reflecting pop culture as perceived by American kids born in the early 90s. Each annotated example spans four lines: the first line contains the sentence, the second line contains the target pronoun, the third line contains the two candidate antecedents, and the fourth line contains the correct antecedent. If the target pronoun appears more than once in the sentence, its first occurrence is the one to be resolved. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### plain_text - **Size of downloaded dataset files:** 0.23 MB - **Size of the generated dataset:** 0.24 MB - **Total amount of disk used:** 0.47 MB An example of 'train' looks as follows. 
``` { "candidates": ["coreference resolution", "chunking"], "label": 0, "pronoun": "it", "sentence": "There is currently more work on coreference resolution than on chunking because it is a problem that is still far from being solved." } ``` ### Data Fields The data fields are the same among all splits. #### plain_text - `sentence`: a `string` feature. - `pronoun`: a `string` feature. - `candidates`: a `list` of `string` features. - `label`: a classification label, with possible values including `0` (0), `1` (1). ### Data Splits | name |train|test| |----------|----:|---:| |plain_text| 1322| 564| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{rahman2012resolving, title={Resolving complex cases of definite pronouns: the winograd schema challenge}, author={Rahman, Altaf and Ng, Vincent}, booktitle={Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning}, pages={777--789}, year={2012}, organization={Association for Computational Linguistics} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
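The raw distribution's four-line example format described in the summary above can be parsed with a short helper. This is an illustrative sketch, not the official loader: the function name is my own, and it assumes the two candidate antecedents on the third line are comma-separated, which may differ in the actual files.

```python
def parse_examples(raw: str):
    """Parse the raw four-line format: sentence, target pronoun,
    candidate antecedents (assumed comma-separated), correct antecedent."""
    examples = []
    lines = [ln.strip() for ln in raw.strip().splitlines() if ln.strip()]
    for i in range(0, len(lines), 4):
        sentence, pronoun, candidates_line, answer = lines[i:i + 4]
        candidates = [c.strip() for c in candidates_line.split(",")]
        examples.append({
            "sentence": sentence,
            "pronoun": pronoun,
            "candidates": candidates,
            # Label 0 if the first candidate is the correct antecedent, else 1,
            # matching the `label` class names in the card above.
            "label": 0 if answer == candidates[0] else 1,
        })
    return examples
```

The processed Hub version already exposes these fields directly, so a helper like this is only needed when working from the raw files.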
SkelterLabsInc/JaQuAD
2022-10-25T09:06:40.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ja", "license:cc-by-sa-3.0", "arxiv:2202.01764", "region:us" ]
SkelterLabsInc
null
null
null
5
183
--- annotations_creators: - crowdsourced language_creators: - crowdsourced - found language: - ja license: - cc-by-sa-3.0 multilinguality: - monolingual paperswithcode_id: null pretty_name: "JaQuAD: Japanese Question Answering Dataset" size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering task_ids: - extractive-qa --- # Dataset Card for JaQuAD ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splitting](#data-splitting) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Dataset Description - **Repository:** https://github.com/SkelterLabsInc/JaQuAD - **Paper:** [JaQuAD: Japanese Question Answering Dataset for Machine Reading Comprehension](https://arxiv.org/abs/2202.01764) - **Point of Contact:** [jaquad@skelterlabs.com](mailto:jaquad@skelterlabs.com) - **Size of dataset files:** 24.6 MB - **Size of the generated dataset:** 48.6 MB - **Total amount of disk used:** 73.2 MB ### Dataset Summary Japanese Question Answering Dataset (JaQuAD), released in 2022, is a human-annotated dataset created for Japanese Machine Reading Comprehension. 
JaQuAD was developed to provide a SQuAD-like QA dataset in Japanese. JaQuAD contains 39,696 question-answer pairs. Questions and answers are manually curated by human annotators. Contexts are collected from Japanese Wikipedia articles. Fine-tuning [BERT-Japanese](https://huggingface.co/cl-tohoku/bert-base-japanese) on JaQuAD achieves an F1 score of 78.92% and an exact-match score of 63.38%. ### Supported Tasks - `extractive-qa`: This dataset is intended to be used for `extractive-qa`. ### Languages Japanese (`ja`) ## Dataset Structure ### Data Instances - **Size of dataset files:** 24.6 MB - **Size of the generated dataset:** 48.6 MB - **Total amount of disk used:** 73.2 MB An example of 'validation': ```python { "id": "de-001-00-000", "title": "イタセンパラ", "context": "イタセンパラ(板鮮腹、Acheilognathus longipinnis)は、コイ科のタナゴ亜科タナゴ属に分類される淡水魚の一種。\n別名はビワタナゴ(琵琶鱮、琵琶鰱)。", "question": "ビワタナゴの正式名称は何?", "question_type": "Multiple sentence reasoning", "answers": { "text": "イタセンパラ", "answer_start": 0, "answer_type": "Object", }, }, ``` ### Data Fields - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `question_type`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: an `int32` feature. - `answer_type`: a `string` feature. ### Data Splitting JaQuAD consists of three sets, `train`, `validation`, and `test`. They were created from disjoint sets of Wikipedia articles. The `test` set is not publicly released yet. The following table shows statistics for each set. Set | Number of Articles | Number of Contexts | Number of Questions --------------|--------------------|--------------------|-------------------- Train | 691 | 9713 | 31748 Validation | 101 | 1431 | 3939 Test | 109 | 1479 | 4009 ## Dataset Creation ### Curation Rationale The JaQuAD dataset was created by [Skelter Labs](https://skelterlabs.com/) to provide a SQuAD-like QA dataset in Japanese. 
Questions are original and based on Japanese Wikipedia articles. ### Source Data The articles used for the contexts are from [Japanese Wikipedia](https://ja.wikipedia.org/). 88.7% of articles are from the curated list of Japanese high-quality Wikipedia articles, e.g., [featured articles](https://ja.wikipedia.org/wiki/Wikipedia:%E8%89%AF%E8%B3%AA%E3%81%AA%E8%A8%98%E4%BA%8B) and [good articles](https://ja.wikipedia.org/wiki/Wikipedia:%E7%A7%80%E9%80%B8%E3%81%AA%E8%A8%98%E4%BA%8B). ### Annotations Wikipedia articles were scraped and divided into one or more paragraphs, which serve as contexts. Annotations (questions and answer spans) are written by fluent Japanese speakers, including natives and non-natives. Annotators were given a context and asked to generate non-trivial questions about information in the context. ### Personal and Sensitive Information No personal or sensitive information is included in this dataset. The dataset annotators have manually verified this. ## Considerations for Using the Data Users should consider that the articles are sampled from Wikipedia but are not representative of all Wikipedia articles. ### Social Impact of Dataset The social biases of this dataset have not yet been investigated. ### Discussion of Biases The social biases of this dataset have not yet been investigated. Articles and questions have been selected for quality and diversity. ### Other Known Limitations The JaQuAD dataset has the following limitations: - Most answers are short. - It assumes that each question is answerable using the corresponding context. This dataset is still incomplete. If you find any errors in JaQuAD, please contact us. ## Additional Information ### Dataset Curators Skelter Labs: [https://skelterlabs.com/](https://skelterlabs.com/) ### Licensing Information The JaQuAD dataset is licensed under the [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license. 
### Citation Information ```bibtex @misc{so2022jaquad, title={{JaQuAD: Japanese Question Answering Dataset for Machine Reading Comprehension}}, author={ByungHoon So and Kyuhong Byun and Kyungwon Kang and Seongjin Cho}, year={2022}, eprint={2202.01764}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Acknowledgements This work was supported by [TPU Research Cloud (TRC) program](https://sites.research.google/trc/). For training models, we used cloud TPUs provided by TRC. We also thank annotators who generated JaQuAD.
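Because answers are extractive spans, the `answer_start` offset should index the answer text within its context. A minimal sanity check (a sketch of my own, not part of the dataset tooling) can catch misaligned spans before training:

```python
def span_is_aligned(context: str, answer_text: str, answer_start: int) -> bool:
    """Check that the annotated offset actually points at the answer text."""
    return context[answer_start:answer_start + len(answer_text)] == answer_text

# In the validation example above, the answer "イタセンパラ" starts at offset 0.
context = "イタセンパラ(板鮮腹、Acheilognathus longipinnis)は、コイ科のタナゴ亜科タナゴ属に分類される淡水魚の一種。"
print(span_is_aligned(context, "イタセンパラ", 0))  # True
```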
yxchar/citation_intent-tlm
2021-11-04T23:47:24.000Z
[ "region:us" ]
yxchar
null
null
null
2
183
Entry not found
lighteval/mutual_harness
2023-08-09T15:50:01.000Z
[ "region:us" ]
lighteval
MuTual is a retrieval-based dataset for multi-turn dialogue reasoning, which is modified from Chinese high school English listening comprehension test data.
@inproceedings{mutual, title = "MuTual: A Dataset for Multi-Turn Dialogue Reasoning", author = "Cui, Leyang and Wu, Yu and Liu, Shujie and Zhang, Yue and Zhou, Ming" , booktitle = "Proceedings of the 58th Conference of the Association for Computational Linguistics", year = "2020", publisher = "Association for Computational Linguistics", }
null
2
183
Entry not found
yzhuang/autotree_automl_10000_default-of-credit-card-clients_sgosdt_l256_dim10_d3_sd0
2023-09-07T04:10:11.000Z
[ "region:us" ]
yzhuang
null
null
null
0
183
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 122258450 dataset_size: 472880000 --- # Dataset Card for "autotree_automl_10000_default-of-credit-card-clients_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
emrgnt-cmplxty/sciphi-python-textbook
2023-10-09T15:49:57.000Z
[ "license:llama2", "arxiv:2306.11644", "region:us" ]
emrgnt-cmplxty
null
null
null
30
183
--- license: llama2 configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: formatted_prompt dtype: string - name: completion dtype: string splits: - name: train num_bytes: 1622068845 num_examples: 555072 download_size: 753726617 dataset_size: 1622068845 --- # Reasoning with Language and Code: A SciPhi Collection ## Dataset Description This dataset, currently **300k fully open-source samples**, was generated using the [SciPhi repository](https://github.com/emrgnt-cmplxty/SciPhi). The primary objective of this repository is to emphasize the significance of textbook-like content with high educational value, conveyed through code snippets. Each code snippet has been meticulously crafted to ensure maximum readability and comprehension. This collection focuses exclusively on Python, in line with the findings from the original paper. Work is ongoing to extend this dataset and introduce new related datasets. ### Source This dataset was inspired by and generated based on the paper "Textbooks Are All You Need" and related open-source datasets, e.g. `nampdn-ai/tiny-codes`. These studies demonstrated that LLMs can achieve state-of-the-art results on code-related tasks when trained on high-quality data that mirrors textbooks and exercises. This repository seeks to provide such data for data analysts and ML engineers eager to deepen their understanding of how small LLMs can learn to reason with code. ### Use Cases Researchers and developers can utilize this dataset to: - Train and evaluate LLMs on reasoning tasks with code. - Reproduce the dataset using other LLMs and compare outcomes. - Forge new prompts from related properties for varied experiments. ### Limitations Please be aware that this dataset is not designed with production tasks in mind. Further, post-processing has not been carried out extensively and is recommended. ### Questions? 
Join us on [Discord here](https://discord.gg/j9GxfbxqAe) or [contact me](mailto:owe@emergentagi.com) directly. ## Dataset Structure ### Features The dataset includes the following features: - `prompt_template`: (string) The template used for the prompt. - `course_type`: (string) Type of course associated with the data. - `programming_paradigm`: (string) Programming paradigm related to the code. - `concept`: (string) The main concept or topic of the snippet. - `depth`: (string) Depth or complexity level of the snippet. - `additional_context`: (string) Any additional context or information. - `model`: (string) Model details or type. - `batch_size`: (int64) Batch size used during generation. - `temperature`: (int64) Temperature setting during generation. - `top_p`: (float64) Top-p setting during generation. - `top_k`: (int64) Top-k setting during generation. - `max_tokens`: (int64) Maximum tokens setting during generation. - `generated_text`: (string) The generated text or code snippet. ### Splits - `train`: 294,144 examples. ## Citation For the original inspiration and methodology behind this dataset, please cite: ``` @misc{gunasekar2023textbooks, title={Textbooks Are All You Need}, author={Suriya Gunasekar and Yi Zhang and Jyoti Aneja and Caio César Teodoro Mendes and Allie Del Giorno and Sivakanth Gopi and Mojan Javaheripi and Piero Kauffmann and Gustavo de Rosa and Olli Saarikivi and Adil Salim and Shital Shah and Harkirat Singh Behl and Xin Wang and Sébastien Bubeck and Ronen Eldan and Adam Tauman Kalai and Yin Tat Lee and Yuanzhi Li}, year={2023}, eprint={2306.11644}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
PORTULAN/glue-ptpt
2023-05-12T12:49:02.000Z
[ "language_creators:machine-generated", "size_categories:10K<n<100K", "source_datasets:glue", "language:pt", "arxiv:2305.06721", "region:us" ]
PORTULAN
GLUE-PTPT is a European Portuguese translation of the GLUE benchmark, produced using DeepL Pro.
@misc{Gomes2023, author = {Luís Gomes and João Rodrigues and João Silva and António Branco and Rodrigo Santos}, title = {GLUE-PTPT -- The General Language Understanding Evaluation benchmark translated to European Portuguese}, year = {2023}, publisher = {Hugging Face}, journal = {Hugging Face dataset}, howpublished = {\\url{https://huggingface.co/datasets/PORTULAN/glue-ptpt}}, }
null
3
182
--- language: - pt language_creators: - machine-generated source_datasets: - glue pretty_name: GLUE-PTPT -- The General Language Understanding Evaluation benchmark translated to European Portuguese size_categories: - 10K<n<100K --- # GLUE-PTPT -- The General Language Understanding Evaluation benchmark translated to European Portuguese This dataset has been created to evaluate [Albertina PT-* models](https://huggingface.co/PORTULAN/albertina-ptpt). If you use this dataset, please cite: @misc{rodrigues2023advancing, title={Advancing Neural Encoding of Portuguese with Transformer Albertina PT-*}, author={João Rodrigues and Luís Gomes and João Silva and António Branco and Rodrigo Santos and Henrique Lopes Cardoso and Tomás Osório}, year={2023}, eprint={2305.06721}, archivePrefix={arXiv}, primaryClass={cs.CL} } Thus far, only 4 tasks have been translated into European Portuguese: - MRPC - RTE - STS-B - WNLI The remaining tasks will be added in the future. See [gluebenchmark.com](https://gluebenchmark.com/) for information about the General Language Understanding Evaluation (GLUE) dataset.
yzhuang/autotree_pmlb_10000_Hill_Valley_with_noise_sgosdt_l256_dim10_d3_sd0
2023-09-07T04:14:27.000Z
[ "region:us" ]
yzhuang
null
null
null
0
182
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 172085873 dataset_size: 472880000 --- # Dataset Card for "autotree_pmlb_10000_Hill_Valley_with_noise_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_pmlb_10000_Hill_Valley_without_noise_sgosdt_l256_dim10_d3_sd0
2023-09-07T05:25:19.000Z
[ "region:us" ]
yzhuang
null
null
null
0
182
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 179483399 dataset_size: 472880000 --- # Dataset Card for "autotree_pmlb_10000_Hill_Valley_without_noise_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/78fe0016
2023-10-03T01:21:52.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
null
0
182
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 173 num_examples: 10 download_size: 1317 dataset_size: 173 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "78fe0016" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Zaid/quac_expanded
2021-10-04T19:41:30.000Z
[ "region:us" ]
Zaid
Question Answering in Context is a dataset for modeling, understanding, and participating in information seeking dialog. Data instances consist of an interactive dialog between two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts (spans) from the text. QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context.
@inproceedings{choi-etal-2018-quac, title = "QUAC: Question answering in context", abstract = "We present QuAC, a dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total). The dialogs involve two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts from the text. QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context, as we show in a detailed qualitative evaluation. We also report results for a number of reference models, including a recently state-of-the-art reading comprehension architecture extended to model dialog context. Our best model underperforms humans by 20 F1, suggesting that there is significant room for future work on this data. Dataset, baseline, and leaderboard available at http://quac.ai.", author = "Eunsol Choi and He He and Mohit Iyyer and Mark Yatskar and Yih, {Wen Tau} and Yejin Choi and Percy Liang and Luke Zettlemoyer", year = "2018", language = "English (US)", series = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018", publisher = "Association for Computational Linguistics", pages = "2174--2184", editor = "Ellen Riloff and David Chiang and Julia Hockenmaier and Jun'ichi Tsujii", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018", note = "2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 ; Conference date: 31-10-2018 Through 04-11-2018", }
null
0
181
Entry not found
Bingsu/Human_Action_Recognition
2022-07-05T02:48:56.000Z
[ "task_categories:image-classification", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:odbl", "region:us" ]
Bingsu
null
null
null
7
181
--- language: - en license: - odbl pretty_name: Human Action Recognition size_categories: - 10K<n<100K source_datasets: - original task_categories: - image-classification --- ## Dataset Description - **Homepage:** [Human Action Recognition (HAR) Dataset](https://www.kaggle.com/datasets/meetnagadia/human-action-recognition-har-dataset) - **Repository:** N/A - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** N/A ## Dataset Summary A dataset from [Kaggle](https://www.kaggle.com/datasets/meetnagadia/human-action-recognition-har-dataset). origin: https://dphi.tech/challenges/data-sprint-76-human-activity-recognition/233/data ### Introduction - The dataset features 15 different classes of human activities. - The dataset contains over 12k labelled images, including the validation images. - Each image has only one human activity category and is saved in the folder of its labelled class. ### PROBLEM STATEMENT - Human Action Recognition (HAR) aims to understand human behavior and assign a label to each action. It has a wide range of applications, and therefore has been attracting increasing attention in the field of computer vision. Human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, radar, and WiFi signal, which encode different sources of useful yet distinct information and have various advantages depending on the application scenarios. - Consequently, many existing works have investigated different types of approaches for HAR using various modalities. - Your task is to build an image classification model using a CNN that classifies which class of activity a human is performing. ### About Files - Train - contains all the images that are to be used for training your model. 
In this folder you will find 15 folders namely - 'calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listeningtomusic', 'running', 'sitting', 'sleeping', 'texting', 'using_laptop' which contain the images of the respective human activities. - Test - contains 5400 images of human activities. For these images you are required to make predictions using the respective class names - 'calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listeningtomusic', 'running', 'sitting', 'sleeping', 'texting', 'using_laptop'. - Testing_set.csv - this is the order of the predictions for each image that is to be submitted on the platform. Make sure your submitted predictions list each image's filename in the same order as given in this file. - sample_submission: This is a CSV file that contains the sample submission for the data sprint. ### Data Fields The data instances have the following fields: - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `labels`: an `int` classification label. All `test` data is labeled 0. 
### Class Label Mappings: ``` { 'calling': 0, 'clapping': 1, 'cycling': 2, 'dancing': 3, 'drinking': 4, 'eating': 5, 'fighting': 6, 'hugging': 7, 'laughing': 8, 'listening_to_music': 9, 'running': 10, 'sitting': 11, 'sleeping': 12, 'texting': 13, 'using_laptop': 14 } ``` ### Data Splits | | train | test | |---------------|--------|-----:| | # of examples | 12600 | 5400 | ### Data Size - download: 311.96 MiB - generated: 312.59 MiB - total: 624.55 MiB ```pycon >>> from datasets import load_dataset >>> ds = load_dataset("Bingsu/Human_Action_Recognition") >>> ds DatasetDict({ test: Dataset({ features: ['image', 'labels'], num_rows: 5400 }) train: Dataset({ features: ['image', 'labels'], num_rows: 12600 }) }) >>> ds["train"].features {'image': Image(decode=True, id=None), 'labels': ClassLabel(num_classes=15, names=['calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listening_to_music', 'running', 'sitting', 'sleeping', 'texting', 'using_laptop'], id=None)} >>> ds["train"][0] {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=240x160>, 'labels': 11} ```
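A model trained on this dataset predicts integer labels, so the class-label mapping above is typically inverted to decode predictions back into activity names; a minimal sketch:

```python
# Class-label mapping reproduced from the card above.
LABEL2ID = {
    'calling': 0, 'clapping': 1, 'cycling': 2, 'dancing': 3, 'drinking': 4,
    'eating': 5, 'fighting': 6, 'hugging': 7, 'laughing': 8,
    'listening_to_music': 9, 'running': 10, 'sitting': 11, 'sleeping': 12,
    'texting': 13, 'using_laptop': 14,
}
# Invert the mapping to decode integer predictions.
ID2LABEL = {v: k for k, v in LABEL2ID.items()}

predictions = [11, 2, 13]  # example integer predictions
print([ID2LABEL[p] for p in predictions])  # ['sitting', 'cycling', 'texting']
```

With the `datasets` library loaded as shown above, the same decoding is also available via `ds["train"].features["labels"].int2str`.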
PKU-Alignment/BeaverTails
2023-07-20T15:33:08.000Z
[ "task_categories:text-classification", "size_categories:100K<n<1M", "language:en", "license:cc-by-nc-4.0", "safe", "safety", "ai-safety", "moderation", "rejection-sampling", "llm", "lm", "human-feedback", "arxiv:2307.04657", "region:us" ]
PKU-Alignment
null
null
null
11
181
--- license: cc-by-nc-4.0 task_categories: - text-classification language: - en tags: - safe - safety - ai-safety - moderation - rejection-sampling - llm - lm - human-feedback size_categories: - 100K<n<1M --- # Dataset Card for BeaverTails BeaverTails is an AI safety-focused collection comprising a series of datasets. This repository includes human-labeled data consisting of question-answer (QA) pairs, each identified with their corresponding harm categories. It should be noted that a single QA pair can be associated with more than one category. ![dataset-distribution](images/dataset-distribution.png) The 14 harm categories are defined as follows: 1. **Animal Abuse:** This involves any form of cruelty or harm inflicted on animals, including physical abuse, neglect, or any behavior that causes distress or harm to an animal. It also includes content that promotes such behavior. 1. **Child Abuse:** This encompasses any form of physical, emotional, or sexual abuse directed toward children. It can also include neglect, exploitation, or any behavior that harms a child or violates their rights. Content that promotes or glorifies such behavior also falls under this category. 1. **Controversial Topics, Politics:** This refers to discussions or statements related to controversial topics, political ideologies, or sensitive issues that could lead to disagreement or conflict. These topics can be divisive and could potentially cause harm if not handled respectfully and thoughtfully. 1. **Discrimination, Stereotype, Injustice:** Discrimination refers to treating people unfairly or unequally based on their personal characteristics or affiliations such as race, age, gender, religion, etc. Stereotyping involves holding simplified and generalized beliefs about particular groups, which often leads to prejudices and discrimination. Injustice pertains to unfair treatment or violation of the rights of individuals or groups. 1. 
**Drug Abuse, Weapons, Banned Substance:** This category includes the misuse or abuse of illegal drugs, prescription drugs, or any other banned substances. It also involves the unauthorized or illegal possession, use, or trade of weapons. 1. **Financial Crime, Property Crime, Theft:** This includes all forms of illegal activities that lead to financial loss such as fraud, embezzlement, money laundering, bribery, and other forms of corruption. Property crime and theft involve the act of taking someone's property without their consent. 1. **Hate Speech, Offensive Language:** This category encompasses any form of communication or content that expresses intense hostility or violence towards an individual or group based on aspects like race, religion, ethnicity, nationality, gender, sexual orientation, or any other defining characteristics. This also includes language that is considered vulgar, crude, or offensive. 1. **Misinformation Regarding ethics, laws, and safety:** This category involves the spreading of incorrect or misleading information related to ethical issues, laws, regulations, or any matter related to safety. Misinformation can cause harm by influencing people's beliefs or actions negatively. 1. **Non-Violent Unethical Behavior:** This refers to any behavior that is morally or ethically wrong, but does not involve violence. Examples can include dishonest behaviors like lying, cheating, deceiving, spreading rumors, betraying trust, or any other form of unethical conduct. 1. **Privacy Violation:** This refers to actions that invade an individual's privacy, such as unauthorized access, use, disclosure, or collection of personal data. It also includes stalking, doxing, or any other form of intrusion into someone's personal life without their consent. 1. **Self-Harm:** This includes any form of self-inflicted harm, whether physical or emotional, or any content that promotes, glorifies, or encourages self-harm. 
This can also include discussions of suicidal thoughts or plans. 1. **Sexually Explicit, Adult Content:** This category includes any content that depicts explicit sexual activities or adult themes that are considered inappropriate or offensive. This can include pornography, explicit sexual text or messages, and any other form of sexually explicit material. 1. **Terrorism, Organized Crime:** This pertains to any form of content or action related to terrorism or organized crime, including endorsing or promoting terrorist activities, participating in organized criminal activities, or spreading propaganda for such groups. 1. **Violence, Aiding and Abetting, Incitement:** This involves any form of physical harm, threat, or violent behavior towards individuals or groups. Aiding and abetting refers to the act of helping, supporting, or encouraging such violent behaviors or illegal activities. Incitement pertains to the act of provoking or stirring up harmful, violent, or illegal actions. **Disclaimer**: The BeaverTails dataset and its family contain content that may be offensive or upsetting. Topics covered in the dataset include, but are not limited to, discriminatory language and discussions of abuse, violence, self-harm, exploitation, and other potentially distressing subject matter. Please engage with the dataset responsibly and in accordance with your own personal risk tolerance. The dataset is intended for research purposes, specifically for research aimed at creating safer and less harmful AI systems. The views and opinions expressed in the dataset do not represent the views of the PKU-Alignment Team or any of its members. It is important to emphasize that the dataset should not be used for training dialogue agents, as doing so may likely result in harmful model behavior. The primary objective of this dataset is to facilitate research that could minimize or prevent the harm caused by AI systems. 
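Because one QA pair can be tagged with several of the 14 categories at once, downstream classifiers typically encode the labels as a multi-hot vector; a minimal sketch (the category keys are derived from the definitions above and are illustrative, not the exact field names in the released files):

```python
# Category names derived from the 14 definitions above; the exact label keys
# used in the released files may differ, so treat these as illustrative.
CATEGORIES = [
    "animal_abuse",
    "child_abuse",
    "controversial_topics_politics",
    "discrimination_stereotype_injustice",
    "drug_abuse_weapons_banned_substance",
    "financial_crime_property_crime_theft",
    "hate_speech_offensive_language",
    "misinformation_regarding_ethics_laws_and_safety",
    "non_violent_unethical_behavior",
    "privacy_violation",
    "self_harm",
    "sexually_explicit_adult_content",
    "terrorism_organized_crime",
    "violence_aiding_and_abetting_incitement",
]

def to_multi_hot(labels):
    """Encode a set of harm categories as a 14-dimensional 0/1 vector."""
    return [1 if category in labels else 0 for category in CATEGORIES]

# A QA pair flagged for two categories at once.
vec = to_multi_hot({"privacy_violation", "self_harm"})
print(sum(vec))  # 2
```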
## Usage The code snippet below demonstrates how to load the QA-Classification dataset: ```python from datasets import load_dataset # Load the whole dataset dataset = load_dataset('PKU-Alignment/BeaverTails') # Load only the round 0 dataset round0_dataset = load_dataset('PKU-Alignment/BeaverTails', data_dir='round0') # Load the training dataset train_dataset = load_dataset('PKU-Alignment/BeaverTails', split='train') test_dataset = load_dataset('PKU-Alignment/BeaverTails', split='test') ``` ## Papers You can find more information in our Paper: - **Dataset Paper:** <https://arxiv.org/abs/2307.04657> ## Contact The original authors host this dataset on GitHub here: https://github.com/PKU-Alignment/beavertails ## License BeaverTails dataset and its family are released under the CC BY-NC 4.0 License.
AlignmentLab-AI/agentcode
2023-10-10T11:53:55.000Z
[ "region:us" ]
AlignmentLab-AI
null
null
null
2
181
Entry not found
NamCyan/thevault-docstringstyle
2023-09-15T18:55:54.000Z
[ "region:us" ]
NamCyan
null
null
null
0
181
--- dataset_info: features: - name: hexsha dtype: string - name: repo dtype: string - name: path dtype: string - name: license sequence: string - name: language dtype: string - name: identifier dtype: string - name: return_type dtype: string - name: original_string dtype: string - name: original_docstring dtype: string - name: docstring dtype: string - name: docstring_tokens sequence: string - name: code dtype: string - name: code_tokens sequence: string - name: short_docstring dtype: string - name: short_docstring_tokens sequence: string - name: comment sequence: string - name: parameters list: - name: param dtype: string - name: type dtype: string - name: docstring_params struct: - name: returns list: - name: docstring dtype: string - name: docstring_tokens sequence: string - name: type dtype: string - name: raises list: - name: docstring dtype: string - name: docstring_tokens sequence: string - name: type dtype: string - name: params list: - name: identifier dtype: string - name: type dtype: string - name: docstring dtype: string - name: docstring_tokens sequence: string - name: default dtype: string - name: is_optional dtype: bool - name: outlier_params list: - name: identifier dtype: string - name: type dtype: string - name: docstring dtype: string - name: docstring_tokens sequence: string - name: default dtype: string - name: is_optional dtype: bool - name: others list: - name: identifier dtype: string - name: docstring dtype: string - name: docstring_tokens sequence: string - name: instruction dtype: string splits: - name: train num_bytes: 6545943535 num_examples: 1261519 download_size: 1969238091 dataset_size: 6545943535 --- # Dataset Card for "thevault-docstringstyle" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SneakyInsect/maestro-rollingsplit
2023-10-04T13:21:21.000Z
[ "region:us" ]
SneakyInsect
null
null
null
0
181
--- dataset_info: features: - name: name dtype: string - name: start sequence: float64 - name: duration sequence: float64 - name: pitch sequence: float64 - name: velocity sequence: float64 splits: - name: train num_bytes: 745208510 num_examples: 373963 - name: validation num_bytes: 84002977 num_examples: 42153 - name: test num_bytes: 97390221 num_examples: 48820 download_size: 144295382 dataset_size: 926601708 --- # Dataset Card for "maestro-rollingsplit" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mwsc
2023-04-05T13:33:22.000Z
[ "task_categories:multiple-choice", "task_ids:multiple-choice-coreference-resolution", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:extended|winograd_wsc", "language:en", "license:cc-by-4.0", "arxiv:1806.08730", "region:us" ]
null
Examples taken from the Winograd Schema Challenge modified to ensure that answers are a single word from the context. This modified Winograd Schema Challenge (MWSC) ensures that scores are neither inflated nor deflated by oddities in phrasing.
@article{McCann2018decaNLP, title={The Natural Language Decathlon: Multitask Learning as Question Answering}, author={Bryan McCann and Nitish Shirish Keskar and Caiming Xiong and Richard Socher}, journal={arXiv preprint arXiv:1806.08730}, year={2018} }
null
0
180
--- annotations_creators: - expert-generated language: - en language_creators: - expert-generated license: - cc-by-4.0 multilinguality: - monolingual pretty_name: Modified Winograd Schema Challenge (MWSC) size_categories: - n<1K source_datasets: - extended|winograd_wsc task_categories: - multiple-choice task_ids: - multiple-choice-coreference-resolution paperswithcode_id: null dataset_info: features: - name: sentence dtype: string - name: question dtype: string - name: options sequence: string - name: answer dtype: string splits: - name: train num_bytes: 11022 num_examples: 80 - name: test num_bytes: 15220 num_examples: 100 - name: validation num_bytes: 13109 num_examples: 82 download_size: 19197 dataset_size: 39351 --- # Dataset Card for The modified Winograd Schema Challenge (MWSC) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://decanlp.com](http://decanlp.com) - **Repository:** https://github.com/salesforce/decaNLP - **Paper:** [The Natural Language Decathlon: 
Multitask Learning as Question Answering](https://arxiv.org/abs/1806.08730) - **Point of Contact:** [Bryan McCann](mailto:bmccann@salesforce.com), [Nitish Shirish Keskar](mailto:nkeskar@salesforce.com) - **Size of downloaded dataset files:** 19.20 kB - **Size of the generated dataset:** 39.35 kB - **Total amount of disk used:** 58.55 kB ### Dataset Summary Examples taken from the Winograd Schema Challenge modified to ensure that answers are a single word from the context. This Modified Winograd Schema Challenge (MWSC) ensures that scores are neither inflated nor deflated by oddities in phrasing. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 0.02 MB - **Size of the generated dataset:** 0.04 MB - **Total amount of disk used:** 0.06 MB An example looks as follows: ``` { "sentence": "The city councilmen refused the demonstrators a permit because they feared violence.", "question": "Who feared violence?", "options": [ "councilmen", "demonstrators" ], "answer": "councilmen" } ``` ### Data Fields The data fields are the same among all splits. #### default - `sentence`: a `string` feature. - `question`: a `string` feature. - `options`: a `list` of `string` features. - `answer`: a `string` feature. 
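Since `answer` is always one of `options`, an instance can be converted into a standard multiple-choice label index; a minimal sketch using the example instance shown above:

```python
# Example instance copied from the Data Instances section above.
example = {
    "sentence": "The city councilmen refused the demonstrators a permit because they feared violence.",
    "question": "Who feared violence?",
    "options": ["councilmen", "demonstrators"],
    "answer": "councilmen",
}

# The gold label is simply the position of the answer among the options.
label = example["options"].index(example["answer"])
print(label)  # 0
```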
### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default| 80| 82| 100| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information Our code for running decaNLP has been open sourced under BSD-3-Clause. 
We chose to restrict decaNLP to datasets that were free and publicly accessible for research, but you should check their individual terms if you deviate from this use case. From the [Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html): > Both versions of the collections are licenced under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/). ### Citation Information If you use this in your work, please cite: ``` @article{McCann2018decaNLP, title={The Natural Language Decathlon: Multitask Learning as Question Answering}, author={Bryan McCann and Nitish Shirish Keskar and Caiming Xiong and Richard Socher}, journal={arXiv preprint arXiv:1806.08730}, year={2018} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@ghomasHudson](https://github.com/ghomasHudson), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
zest
2022-11-18T22:05:40.000Z
[ "task_categories:question-answering", "task_categories:token-classification", "task_ids:closed-domain-qa", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "output-structure", "yes-no-qa", "arxiv:2011.08115", "region:us" ]
null
ZEST tests whether NLP systems can perform unseen tasks in a zero-shot way, given a natural language description of the task. It is an instantiation of our proposed framework "learning from task descriptions". The tasks include classification, typed entity extraction and relationship extraction, and each task is paired with 20 different annotated (input, output) examples. ZEST's structure allows us to systematically test whether models can generalize in five different ways.
@inproceedings{weller-etal-2020-learning, title = "Learning from Task Descriptions", author = "Weller, Orion and Lourie, Nicholas and Gardner, Matt and Peters, Matthew", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.105", pages = "1361--1375", abstract = "Typically, machine learning systems solve new tasks by training on thousands of examples. In contrast, humans can solve new tasks by reading some instructions, with perhaps an example or two. To take a step toward closing this gap, we introduce a framework for developing NLP systems that solve new tasks after reading their descriptions, synthesizing prior work in this area. We instantiate this framework with a new English language dataset, ZEST, structured for task-oriented evaluation on unseen tasks. Formulating task descriptions as questions, we ensure each is general enough to apply to many possible inputs, thus comprehensively evaluating a model{'}s ability to solve each task. Moreover, the dataset{'}s structure tests specific types of systematic generalization. We find that the state-of-the-art T5 model achieves a score of 12% on ZEST, leaving a significant challenge for NLP researchers.", }
null
1
180
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering - token-classification task_ids: - closed-domain-qa - extractive-qa paperswithcode_id: zest pretty_name: ZEST tags: - output-structure - yes-no-qa dataset_info: features: - name: task_id dtype: string - name: question dtype: string - name: generalization_type dtype: string - name: derives_from sequence: string - name: domain dtype: string - name: context dtype: string - name: answer sequence: string - name: all_answers sequence: string splits: - name: train num_bytes: 9588987 num_examples: 10766 - name: validation num_bytes: 2056804 num_examples: 2280 - name: test num_bytes: 9280845 num_examples: 11980 download_size: 5796188 dataset_size: 20926636 --- # Dataset Card for "ZEST: ZEroShot learning from Task descriptions" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset 
Description - **Homepage:** https://allenai.org/data/zest - **Repository:** https://github.com/allenai/zest - **Paper:** https://arxiv.org/abs/2011.08115 - **Leaderboard:** https://leaderboard.allenai.org/zest/submissions/public - **Point of Contact:** ### Dataset Summary ZEST tests whether NLP systems can perform unseen tasks in a zero-shot way, given a natural language description of the task. It is an instantiation of our proposed framework "learning from task descriptions". The tasks include classification, typed entity extraction and relationship extraction, and each task is paired with 20 different annotated (input, output) examples. ZEST's structure allows us to systematically test whether models can generalize in five different ways. ### Supported Tasks and Leaderboards A [leaderboard](https://leaderboard.allenai.org/zest/submissions/public) is included, with metrics for each of the four generalization types outlined in the paper. These are novel acceptability metrics, also proposed by the authors. ### Languages The dataset is in English. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale To evaluate the ability of a model to generalize to unseen tasks based only on a task description in a zero-shot manner. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Mechanical Turk crowdsource workers. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? Mechanical Turk crowdsource workers. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset The dataset emphasizes a model's ability to generalize to unseen tasks with only a natural language description of the task. 
The long-term vision of this type of evaluation is to facilitate the creation of models which can perform arbitrary tasks with only a prompt from a non-technical user. This could broaden the frontier of what a user can ask something like a chatbot to do for them, but it is unclear how restrictions would be put in place to prevent users from prompting a system to perform unethical tasks. ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This dataset is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). ### Citation Information ``` @inproceedings{weller-etal-2020-learning, title = "Learning from Task Descriptions", author = "Weller, Orion and Lourie, Nicholas and Gardner, Matt and Peters, Matthew", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.105", pages = "1361--1375", abstract = "Typically, machine learning systems solve new tasks by training on thousands of examples. In contrast, humans can solve new tasks by reading some instructions, with perhaps an example or two. To take a step toward closing this gap, we introduce a framework for developing NLP systems that solve new tasks after reading their descriptions, synthesizing prior work in this area. We instantiate this framework with a new English language dataset, ZEST, structured for task-oriented evaluation on unseen tasks. Formulating task descriptions as questions, we ensure each is general enough to apply to many possible inputs, thus comprehensively evaluating a model{'}s ability to solve each task. Moreover, the dataset{'}s structure tests specific types of systematic generalization. 
We find that the state-of-the-art T5 model achieves a score of 12% on ZEST, leaving a significant challenge for NLP researchers.", } ``` ### Contributions Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
yxchar/sciie-tlm
2021-11-05T02:04:05.000Z
[ "region:us" ]
yxchar
null
null
null
0
180
Entry not found
reasoning-machines/gsm-hard
2023-01-17T03:21:10.000Z
[ "task_categories:text2text-generation", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:gsm8k (https://huggingface.co/datasets/gsm8k)", "language:code", "license:mit", "math_reasoning", "symbolic_reasoning", "arxiv:2211.10435", "region:us" ]
reasoning-machines
null
null
null
12
180
--- annotations_creators: [] language_creators: - crowdsourced - expert-generated language: - code license: - mit multilinguality: - monolingual size_categories: - unknown source_datasets: - gsm8k (https://huggingface.co/datasets/gsm8k) task_categories: - text2text-generation task_ids: [] pretty_name: gsm-hard tags: - math_reasoning - symbolic_reasoning --- ## Dataset Description - **Repository:** https://reasonwithpal.com/ - **Paper:** [PaL: Program-Aided Language Model](https://arxiv.org/abs/2211.10435) ### Dataset Summary This is the harder version of gsm8k math reasoning dataset (https://huggingface.co/datasets/gsm8k). We construct this dataset by replacing the numbers in the questions of GSM8K with larger numbers that are less common.  ### Supported Tasks and Leaderboards This dataset is used to evaluate math reasoning ### Languages English - Numbers ## Dataset Structure ```python dataset = load_dataset("reasoning-machines/gsm-hard") DatasetDict({ train: Dataset({ features: ['input', 'code', 'target'], num_rows: 1319 }) }) ``` ### Data Fields train/dev/test: - input: The question - code: The corresponding code solution to the question - target: The answer ### Citation Information ``` @article{gao2022pal, title={PAL: Program-aided Language Models}, author={Gao, Luyu and Madaan, Aman and Zhou, Shuyan and Alon, Uri and Liu, Pengfei and Yang, Yiming and Callan, Jamie and Neubig, Graham}, journal={arXiv preprint arXiv:2211.10435}, year={2022} } ```
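The `code` field described above holds a program whose return value should match `target`; a toy sketch of checking one row this way (the row below is illustrative, not an actual dataset entry):

```python
# Illustrative row in the shape described above (not an actual dataset entry).
row = {
    "input": "A robe takes 2887587 bolts of blue fiber and half that much "
             "white fiber. How many bolts in total does it take?",
    "code": (
        "def solution():\n"
        "    blue = 2887587\n"
        "    white = blue / 2\n"
        "    return blue + white\n"
    ),
    "target": 4331380.5,
}

# Evaluation sketch: execute the program solution, then compare to the target.
namespace = {}
exec(row["code"], namespace)          # defines solution()
predicted = namespace["solution"]()
print(predicted == row["target"])     # True
```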
AnonymousSub/MedQuAD_47441_Question_Answer_Pairs
2023-03-09T15:02:29.000Z
[ "region:us" ]
AnonymousSub
null
null
null
4
180
--- dataset_info: features: - name: Questions dtype: string - name: Answers dtype: string splits: - name: train num_bytes: 24216623 num_examples: 47441 download_size: 9258859 dataset_size: 24216623 --- # Dataset Card for "MedQuAD_47441_Question_Answer_Pairs" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
germank/shp_with_features_20k_flan_t5_large
2023-05-15T08:49:22.000Z
[ "region:us" ]
germank
null
null
null
0
180
--- dataset_info: features: - name: post_id dtype: string - name: domain dtype: string - name: upvote_ratio dtype: float64 - name: history dtype: string - name: c_root_id_A dtype: string - name: c_root_id_B dtype: string - name: created_at_utc_A dtype: int64 - name: created_at_utc_B dtype: int64 - name: score_A dtype: int64 - name: score_B dtype: int64 - name: human_ref_A dtype: string - name: human_ref_B dtype: string - name: labels dtype: int64 - name: seconds_difference dtype: float64 - name: score_ratio dtype: float64 - name: helpfulness_A dtype: float64 - name: helpfulness_B dtype: float64 - name: specificity_A dtype: float64 - name: specificity_B dtype: float64 - name: intent_A dtype: float64 - name: intent_B dtype: float64 - name: factuality_A dtype: float64 - name: factuality_B dtype: float64 - name: easy-to-understand_A dtype: float64 - name: easy-to-understand_B dtype: float64 - name: relevance_A dtype: float64 - name: relevance_B dtype: float64 - name: readability_A dtype: float64 - name: readability_B dtype: float64 - name: enough-detail_A dtype: float64 - name: enough-detail_B dtype: float64 - name: biased:_A dtype: float64 - name: biased:_B dtype: float64 - name: fail-to-consider-individual-preferences_A dtype: float64 - name: fail-to-consider-individual-preferences_B dtype: float64 - name: repetetive_A dtype: float64 - name: repetetive_B dtype: float64 - name: fail-to-consider-context_A dtype: float64 - name: fail-to-consider-context_B dtype: float64 - name: too-long_A dtype: float64 - name: too-long_B dtype: float64 - name: __index_level_0__ dtype: int64 - name: log_score_A dtype: float64 - name: log_score_B dtype: float64 splits: - name: test num_bytes: 20659940 num_examples: 9459 - name: train num_bytes: 20707062 num_examples: 9459 download_size: 23927350 dataset_size: 41367002 --- # Dataset Card for "shp_with_features_20k_flan_t5_large" [More Information 
needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
abacusai/WikiQA-Free_Form_QA
2023-07-27T14:37:54.000Z
[ "region:us" ]
abacusai
null
null
null
7
180
--- configs: - config_name: default data_files: - split: 2k path: data/2k-* - split: 4k path: data/4k-* - split: 8k path: data/8k-* - split: 16k path: data/16k-* dataset_info: features: - name: conversations list: - name: from dtype: string - name: tok_len dtype: int64 - name: value dtype: string splits: - name: 2k num_bytes: 3555934 num_examples: 600 - name: 4k num_bytes: 6926324 num_examples: 600 - name: 8k num_bytes: 13605196 num_examples: 600 - name: 16k num_bytes: 24856440 num_examples: 600 download_size: 10741984 dataset_size: 48943894 --- # Dataset Card for "WikiQA-Free_Form_QA" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
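Each turn in `conversations` carries a `tok_len` count, so summing over a conversation indicates which length-bucketed split (`2k`, `4k`, `8k`, `16k`) it fits; a minimal sketch (both the example turns and the bucket thresholds are assumptions, not taken from the dataset):

```python
# Illustrative conversation in the shape described by the features above.
conversation = [
    {"from": "human", "tok_len": 1800, "value": "<long context + question>"},
    {"from": "gpt", "tok_len": 40, "value": "<short answer>"},
]

# Total token length of the conversation.
total = sum(turn["tok_len"] for turn in conversation)

# Map the total onto the card's split names; the thresholds are assumed.
buckets = [(2048, "2k"), (4096, "4k"), (8192, "8k"), (16384, "16k")]
split = next(name for limit, name in buckets if total <= limit)
print(split)  # 2k
```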
kelm
2022-11-18T20:16:37.000Z
[ "task_categories:other", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "data-to-text-generation", "arxiv:2010.12688", "region:us" ]
null
Data-To-Text Generation involves converting knowledge graph (KG) triples of the form (subject, relation, object) into a natural language sentence(s). This dataset consists of English KG data converted into paired natural language text. The generated corpus consists of ∼18M sentences spanning ∼45M triples with ∼1500 distinct relations.
@misc{agarwal2020large, title={Large Scale Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training}, author={Oshin Agarwal and Heming Ge and Siamak Shakeri and Rami Al-Rfou}, year={2020}, eprint={2010.12688}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
6
179
--- annotations_creators: - found language_creators: - found language: - en license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - original task_categories: - other task_ids: [] paperswithcode_id: kelm pretty_name: Corpus for Knowledge-Enhanced Language Model Pre-training (KELM) tags: - data-to-text-generation dataset_info: features: - name: triple dtype: string - name: sentence dtype: string splits: - name: train num_bytes: 1343187306 num_examples: 6371131 - name: validation num_bytes: 167790917 num_examples: 796471 - name: test num_bytes: 167921750 num_examples: 796493 download_size: 1631259869 dataset_size: 1678899973 --- # Dataset Card for Corpus for Knowledge-Enhanced Language Model Pre-training (KELM) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/google-research-datasets/KELM-corpus - **Repository:** https://github.com/google-research-datasets/KELM-corpus - **Paper:** https://arxiv.org/abs/2010.12688 - 
**Leaderboard:** - **Point of Contact:** ### Dataset Summary Data-To-Text Generation involves converting knowledge graph (KG) triples of the form (subject, relation, object) into natural language sentence(s). This dataset consists of English KG data converted into paired natural language text. The generated corpus consists of ∼18M sentences spanning ∼45M triples with ∼1500 distinct relations. ### Supported Tasks and Leaderboards The intended task is data-to-text generation, taking in a knowledge graph tuple and generating a natural language representation from it. Specifically, the data is in the format the authors used to train a seq2seq language model with the tuples concatenated into a single sequence. ### Languages The dataset is in English. ## Dataset Structure ### Data Instances Each instance consists of one KG triple paired with corresponding natural language. ### Data Fields - `triple`: Wikipedia triples of the form `<subject> <relation> <object>` where some subjects have multiple relations, e.g. `<subject> <relation1> <object1> <relation2> <object2> <relation3> <object3>`. For more details on how these relations are grouped, please refer to the paper. - `sentence`: The corresponding Wikipedia sentence. ### Data Splits The dataset includes a pre-determined train, validation, and test split. ## Dataset Creation ### Curation Rationale The goal of the dataset's curation and the associated modeling work discussed in the paper is to be able to generate natural text from a knowledge graph. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? The data is sourced from English Wikipedia and its associated knowledge graph. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases From the paper: > Wikipedia has documented ideological, gender, and racial biases in its text. While the KELM corpus may still contain some of these biases, certain types of biases may be reduced. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This dataset has been released under the [CC BY-SA 2.0 license](https://creativecommons.org/licenses/by-sa/2.0/). ### Citation Information ``` @misc{agarwal2020large, title={Large Scale Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training}, author={Oshin Agarwal and Heming Ge and Siamak Shakeri and Rami Al-Rfou}, year={2020}, eprint={2010.12688}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
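As noted under Supported Tasks, the data is intended for seq2seq training. A minimal sketch of turning `triple`/`sentence` records into (input, target) pairs (the task prefix and the sample record are illustrative assumptions, not actual KELM rows):

```python
# Sketch of the seq2seq framing described above: the triple string is the
# model input and the sentence is the target. The task prefix and the sample
# record are illustrative choices, not part of the dataset itself.

def make_pair(record: dict, prefix: str = "convert triple to text: "):
    return prefix + record["triple"], record["sentence"]

record = {
    "triple": "Marie Curie, field of work, chemistry",  # hypothetical row
    "sentence": "Marie Curie worked in the field of chemistry.",
}
src, tgt = make_pair(record)
print(src)  # convert triple to text: Marie Curie, field of work, chemistry
```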
gia-project/gia-dataset
2023-09-05T06:36:39.000Z
[ "task_categories:reinforcement-learning", "task_categories:text-generation", "task_categories:question-answering", "annotations_creators:found", "annotations_creators:machine-generated", "source_datasets:conceptual-captions", "source_datasets:ok-vqa", "source_datasets:oscar", "license:apache-2.0", "imitation-learning", "reinforcement-learning", "text-generation", "question-answering", "generalist-agent", "arxiv:2303.03915", "region:us" ]
gia-project
GIA dataset.
null
null
1
179
--- license: apache-2.0 tags: - imitation-learning - reinforcement-learning - text-generation - question-answering - generalist-agent annotations_creators: - found - machine-generated pretty_name: GIA-dataset size_categories: - {number_of_elements_in_dataset} # Example: n<1K, 100K<n<1M, … source_datasets: - conceptual-captions - ok-vqa - oscar task_categories: - reinforcement-learning # Atari, Mujoco, Metaworld, BabyAI - text-generation # OSCAR - question-answering # OK-VQA, conceptual-captions dataset_info: features: - name: text dtype: string - name: images dtype: image - name: image_observations sequence: dtype: image - name: text_observations sequence: dtype: string - name: discrete_observations sequence: sequence: dtype: int64 - name: continuous_observations sequence: sequence: dtype: float32 - name: continuous_actions sequence: sequence: dtype: float32 - name: discrete_actions sequence: dtype: int64 - name: rewards sequence: dtype: float32 splits: - name: train num_examples: - name: test num_examples: download_size: dataset_size: --- # GIA Dataset ## Dataset Description The GIA dataset combines a wide range of individual datasets. It includes demonstrations by expert RL agents, image and caption pairs, textual data and more. The GIA dataset is part of the GIA project, which aims to build a multimodal generalist agent. 
### Usage ```python >>> from datasets import load_dataset >>> dataset = load_dataset("gia-project/gia-dataset", "metaworld-assembly") >>> first_episode = dataset["train"][0] >>> first_episode.keys() dict_keys(['continuous_observations', 'continuous_actions', 'rewards']) >>> len(first_episode["rewards"]) 500 >>> first_episode["continuous_actions"][0] [6.459120273590088, 2.2422609329223633, -5.914587020874023, -19.799840927124023] ``` ## Dataset Structure ### Data Instances <details> <summary>Click to expand the score information for each task</summary> The following table presents a comparative analysis of scores across various domains and tasks. The scores highlight the performance difference between a random agent and the episodes recorded in our dataset. | Task | Random Agent Score | Dataset Episode Score | | ----------------------------------- | :-----------------: | :-------------------: | | **Atari** | | | | atari-alien | 205.50 ± 111.97 | 16912.50 ± 7087.42 | | atari-amidar | 2.38 ± 2.50 | 2164.71 ± 1229.47 | | atari-assault | 262.50 ± 89.61 | 15699.12 ± 9572.12 | | atari-asterix | 213.50 ± 110.87 | 3699.62 ± 2421.30 | | atari-asteroids | 856.40 ± 434.32 | 177011.05 ± 35334.20 | | atari-atlantis | 17764.00 ± 6662.43 | 320679.59 ± 418247.37 | | atari-bankheist | 13.40 ± 11.07 | 1322.43 ± 60.84 | | atari-battlezone | 2170.00 ± 2121.58 | 295592.59 ± 161960.96 | | atari-beamrider | 357.28 ± 143.97 | 29589.35 ± 16132.96 | | atari-berzerk | 160.10 ± 118.87 | 57085.26 ± 13104.53 | | atari-bowling | 23.81 ± 6.07 | 20.40 ± 7.29 | | atari-boxing | 0.52 ± 4.37 | 97.97 ± 3.77 | | atari-breakout | 1.24 ± 1.30 | 702.97 ± 203.62 | | atari-centipede | 2150.06 ± 1113.28 | 11624.29 ± 4918.34 | | atari-choppercommand | 875.00 ± 416.98 | 90990.62 ± 270876.93 | | atari-crazyclimber | 7376.00 ± 2253.09 | 179296.94 ± 39862.06 | | atari-defender | 3417.50 ± 1443.41 | 351958.33 ± 40466.82 | | atari-demonattack | 165.55 ± 92.93 | 92195.25 ± 26174.79 | | atari-doubledunk | -18.54 ± 
3.07 | 20.94 ± 3.65 | | atari-enduro | 0.00 ± 0.00 | 2292.22 ± 147.54 | | atari-fishingderby | -93.90 ± 3.51 | 7.18 ± 25.06 | | atari-freeway | 0.01 ± 0.10 | 33.88 ± 0.35 | | atari-frostbite | 67.60 ± 37.61 | 13196.12 ± 4341.00 | | atari-gopher | 319.40 ± 228.24 | 81676.15 ± 46329.48 | | atari-gravitar | 188.50 ± 203.33 | 3986.57 ± 1729.05 | | atari-hero | 475.25 ± 894.95 | 44677.35 ± 1754.42 | | atari-icehockey | -9.83 ± 3.24 | 25.17 ± 5.79 | | atari-jamesbond | 28.50 ± 45.42 | 27786.89 ± 33819.20 | | atari-kangaroo | 52.00 ± 108.15 | 574.05 ± 636.94 | | atari-krull | 1754.00 ± 583.56 | 11439.83 ± 1218.34 | | atari-kungfumaster | 390.00 ± 359.03 | 32392.81 ± 10006.55 | | atari-montezumarevenge | 0.00 ± 0.00 | 393.53 ± 50.45 | | atari-mspacman | 246.40 ± 121.22 | 6896.08 ± 2031.99 | | atari-namethisgame | 2447.40 ± 888.97 | 22991.18 ± 2473.15 | | atari-phoenix | 776.80 ± 635.86 | 424583.16 ± 97649.17 | | atari-pitfall | -259.75 ± 384.26 | -1.45 ± 4.50 | | atari-pong | -20.22 ± 0.95 | 20.99 ± 0.18 | | atari-privateeye | 41.65 ± 191.83 | 100.00 ± 0.00 | | atari-qbert | 164.25 ± 151.79 | 42971.37 ± 85070.72 | | atari-riverraid | 1474.40 ± 314.59 | 14800.94 ± 7924.56 | | atari-roadrunner | 11.00 ± 42.18 | 77942.80 ± 6088.62 | | atari-robotank | 1.87 ± 1.59 | 80.51 ± 13.28 | | atari-seaquest | 73.20 ± 57.91 | 2597.34 ± 386.09 | | atari-skiing | -16299.52 ± 1850.70 | -10738.06 ± 111.13 | | atari-solaris | 2360.40 ± 1852.03 | 1353.68 ± 516.96 | | atari-spaceinvaders | 137.20 ± 95.82 | 29425.29 ± 23623.89 | | atari-stargunner | 652.00 ± 312.24 | 360588.57 ± 49207.71 | | atari-surround | -9.99 ± 0.10 | 9.39 ± 0.85 | | atari-tennis | -23.95 ± 0.22 | 11.11 ± 7.57 | | atari-timepilot | 3396.00 ± 2128.85 | 69583.33 ± 29838.67 | | atari-tutankham | 12.73 ± 17.40 | 291.16 ± 30.37 | | atari-upndown | 358.90 ± 380.11 | 429418.33 ± 7187.43 | | atari-venture | 0.00 ± 0.00 | 0.00 ± 0.00 | | atari-videopinball | 23917.17 ± 19449.59 | 441507.92 ± 283264.62 | | atari-wizardofwor | 620.00 
± 837.85 | 49333.33 ± 16157.08 | | atari-yarsrevenge | 3503.91 ± 906.14 | 270262.86 ± 161815.96 | | atari-zaxxon | 21.00 ± 102.27 | 73097.22 ± 14825.77 | | **BabyAI** | | | | babyai-action-obj-door | 0.37 ± 0.39 | 0.99 ± 0.01 | | babyai-blocked-unlock-pickup | 0.00 ± 0.02 | 0.95 ± 0.01 | | babyai-boss-level | 0.06 ± 0.21 | 0.94 ± 0.05 | | babyai-boss-level-no-unlock | 0.06 ± 0.19 | 0.94 ± 0.05 | | babyai-find-obj-s5 | 0.08 ± 0.23 | 0.95 ± 0.04 | | babyai-go-to | 0.13 ± 0.29 | 0.92 ± 0.07 | | babyai-go-to-door | 0.45 ± 0.38 | 0.99 ± 0.00 | | babyai-go-to-imp-unlock | 0.08 ± 0.23 | | | babyai-go-to-local | 0.16 ± 0.30 | 0.93 ± 0.04 | | babyai-go-to-obj | 0.13 ± 0.27 | 0.93 ± 0.03 | | babyai-go-to-obj-door | 0.53 ± 0.39 | 0.99 ± 0.01 | | babyai-go-to-red-ball | 0.17 ± 0.30 | 0.93 ± 0.04 | | babyai-go-to-red-ball-grey | 0.12 ± 0.27 | 0.92 ± 0.05 | | babyai-go-to-red-ball-no-dists | 0.14 ± 0.28 | 0.93 ± 0.03 | | babyai-go-to-red-blue-ball | 0.12 ± 0.27 | 0.92 ± 0.05 | | babyai-go-to-seq | 0.08 ± 0.23 | 0.94 ± 0.05 | | babyai-key-corridor | 0.00 ± 0.00 | 0.91 ± 0.01 | | babyai-key-in-box | 0.01 ± 0.06 | | | babyai-mini-boss-level | 0.07 ± 0.21 | 0.89 ± 0.10 | | babyai-move-two-across-s8n9 | 0.00 ± 0.00 | 0.96 ± 0.01 | | babyai-one-room-s8 | 0.08 ± 0.21 | 0.92 ± 0.03 | | babyai-open | 0.10 ± 0.24 | 0.95 ± 0.05 | | babyai-open-door | 0.23 ± 0.34 | 0.99 ± 0.00 | | babyai-open-doors-order-n4 | 0.16 ± 0.30 | 0.99 ± 0.01 | | babyai-open-red-door | 0.08 ± 0.21 | 0.92 ± 0.03 | | babyai-open-two-doors | 0.08 ± 0.20 | 0.98 ± 0.00 | | babyai-pickup | 0.08 ± 0.22 | 0.92 ± 0.07 | | babyai-pickup-above | 0.02 ± 0.09 | 0.91 ± 0.07 | | babyai-pickup-dist | 0.10 ± 0.24 | 0.86 ± 0.21 | | babyai-pickup-loc | 0.08 ± 0.23 | 0.91 ± 0.04 | | babyai-synth | 0.11 ± 0.26 | 0.93 ± 0.06 | | babyai-synth-loc | 0.13 ± 0.29 | 0.94 ± 0.06 | | babyai-synth-seq | 0.07 ± 0.20 | 0.95 ± 0.04 | | babyai-unblock-pickup | 0.08 ± 0.22 | 0.91 ± 0.08 | | babyai-unlock | 0.03 ± 0.15 | | | babyai-unlock-local | 
0.01 ± 0.09 | 0.98 ± 0.01 | | babyai-unlock-pickup | 0.00 ± 0.00 | 0.75 ± 0.04 | | babyai-unlock-to-unlock | 0.00 ± 0.00 | | | **MetaWorld** | | | | metaworld-assembly | 45.30 ± 4.13 | 245.99 ± 3.50 | | metaworld-basketball | 2.81 ± 1.24 | 627.99 ± 1.98 | | metaworld-bin-picking | 1.89 ± 0.45 | 425.58 ± 101.86 | | metaworld-box-close | 76.39 ± 17.91 | 512.49 ± 107.81 | | metaworld-button-press | 31.73 ± 5.20 | 643.10 ± 12.85 | | metaworld-button-press-topdown | 28.97 ± 10.37 | 490.18 ± 27.21 | | metaworld-button-press-topdown-wall | 29.04 ± 10.52 | 497.19 ± 31.37 | | metaworld-button-press-wall | 8.98 ± 3.99 | 675.41 ± 15.04 | | metaworld-coffee-button | 31.72 ± 6.36 | 731.08 ± 29.34 | | metaworld-coffee-pull | 4.09 ± 0.38 | 259.86 ± 88.48 | | metaworld-coffee-push | 4.17 ± 0.76 | 496.78 ± 118.20 | | metaworld-dial-turn | 29.64 ± 16.67 | 793.56 ± 80.06 | | metaworld-disassemble | 40.31 ± 7.53 | 42.83 ± 6.30 | | metaworld-door-close | 5.30 ± 1.33 | 529.75 ± 27.24 | | metaworld-door-lock | 112.35 ± 28.63 | 811.52 ± 34.07 | | metaworld-door-open | 56.37 ± 11.23 | 581.94 ± 19.67 | | metaworld-door-unlock | 94.17 ± 15.56 | 802.88 ± 17.05 | | metaworld-drawer-close | 116.73 ± 253.11 | 867.92 ± 4.48 | | metaworld-drawer-open | 126.85 ± 25.22 | 492.99 ± 2.52 | | metaworld-faucet-close | 253.12 ± 22.94 | 753.92 ± 13.42 | | metaworld-faucet-open | 244.10 ± 23.25 | 705.76 ± 7.15 | | metaworld-hammer | 95.33 ± 9.02 | 693.17 ± 34.62 | | metaworld-hand-insert | 2.75 ± 3.53 | 740.53 ± 36.69 | | metaworld-handle-press | 80.41 ± 110.19 | 855.91 ± 72.75 | | metaworld-handle-press-side | 57.00 ± 39.47 | 861.12 ± 20.01 | | metaworld-handle-pull | 10.34 ± 13.54 | 669.35 ± 24.81 | | metaworld-handle-pull-side | 2.13 ± 2.76 | 384.65 ± 102.89 | | metaworld-lever-pull | 60.31 ± 15.77 | 612.04 ± 38.85 | | metaworld-peg-insert-side | 1.71 ± 0.36 | 315.23 ± 140.07 | | metaworld-peg-unplug-side | 4.75 ± 2.83 | 456.12 ± 81.65 | | metaworld-pick-out-of-hole | 1.51 ± 0.24 | 219.61 ± 88.85 | | 
metaworld-pick-place | 1.61 ± 0.99 | 419.10 ± 98.19 | | metaworld-pick-place-wall | 0.00 ± 0.01 | 450.57 ± 64.10 | | metaworld-plate-slide | 74.64 ± 13.84 | 527.01 ± 155.34 | | metaworld-plate-slide-back | 33.47 ± 11.22 | 718.22 ± 87.41 | | metaworld-plate-slide-back-side | 34.34 ± 11.53 | 729.61 ± 69.15 | | metaworld-plate-slide-side | 22.61 ± 17.36 | 662.81 ± 102.81 | | metaworld-push | 5.51 ± 2.43 | 750.57 ± 43.98 | | metaworld-push-back | 1.21 ± 0.16 | 85.05 ± 107.12 | | metaworld-push-wall | 6.13 ± 3.17 | 748.87 ± 10.62 | | metaworld-reach | 149.67 ± 44.70 | 681.37 ± 133.68 | | metaworld-reach-wall | 143.26 ± 36.56 | 746.12 ± 104.19 | | metaworld-shelf-place | 0.00 ± 0.01 | 241.34 ± 24.60 | | metaworld-soccer | 5.66 ± 4.61 | 375.15 ± 140.24 | | metaworld-stick-pull | 2.64 ± 1.41 | 523.55 ± 18.94 | | metaworld-stick-push | 2.81 ± 1.04 | 627.95 ± 10.20 | | metaworld-sweep | 11.23 ± 7.28 | 494.85 ± 43.29 | | metaworld-sweep-into | 12.55 ± 10.72 | 799.21 ± 19.07 | | metaworld-window-close | 57.46 ± 7.11 | 591.30 ± 38.63 | | metaworld-window-open | 43.36 ± 2.09 | 590.82 ± 57.08 | | **MuJoCo** | | | | mujoco-ant | -59.95 ± 99.62 | 5846.42 ± 942.55 | | mujoco-doublependulum | 57.46 ± 17.54 | 9338.69 ± 352.61 | | mujoco-halfcheetah | -284.97 ± 79.83 | 7437.77 ± 173.30 | | mujoco-hopper | 18.38 ± 17.09 | 1858.73 ± 534.07 | | mujoco-humanoid | 122.02 ± 35.28 | 6281.02 ± 1795.84 | | mujoco-pendulum | 6.07 ± 3.47 | 475.40 ± 178.96 | | mujoco-pusher | -149.69 ± 7.41 | -25.21 ± 6.66 | | mujoco-reacher | -43.00 ± 3.91 | -5.68 ± 2.53 | | mujoco-standup | 33135.75 ± 2481.89 | 273574.16 ± 85253.26 | | mujoco-swimmer | 0.80 ± 10.71 | 92.18 ± 4.44 | | mujoco-walker | 2.68 ± 6.06 | 4631.22 ± 1059.01 | </details> ### Data Fields - `text`: a `string` feature - `images`: a `image` feature - `image_observations` : a `Sequence(image)` feature - `text_observations` : a `Sequence(string)` feature - `discrete_observations`: a `Sequence(Sequence(int64))` feature - 
`continuous_observations`: a `Sequence(Sequence(float32))` feature - `continuous_actions`: a `Sequence(Sequence(float32))` feature - `discrete_actions`: a `Sequence(int64)` feature - `rewards`: a `Sequence(float32)` feature ### Data Splits - `train`: `` examples - `test`: `` examples ## Dataset Creation This section describes how our dataset was created. We specifically detail how data for each domain and task were generated. The generation scripts are available in the [GIA repository](https://github.com/huggingface/gia). For RL tasks, we trained one agent per task using the [Sample Factory](https://www.samplefactory.dev). Then we used the trained agent to generate episodes. ### Atari We used the 57 [ALE/Atari](https://github.com/Farama-Foundation/Arcade-Learning-Environment) games as our environment, configuring the following parameters for our experiments. We rendered the images in grayscale with an 84x84 pixel resolution. The agent interacted with the environment every 4 frames. Sticky actions were not used, and the raw reward (no clipping) was reported. Episodes were stored as complete, i.e. with no termination on life loss. ### BabyAI We used BabyAI's implementation from [Minigrid](https://github.com/Farama-Foundation/Minigrid). We reused the [bot agent](https://github.com/mila-iqia/babyai) provided with BabyAI's paper and adapted it to the new Minigrid API. Using the bot, we generated 100,000 episodes for each of the 39 tasks of [Minigrid's BabyAI](https://minigrid.farama.org/environments/babyai/) and stored for each step: - the mission: str - the concatenation of the flattened symbolic observation and the direction: Array of integers of size (147,) - the action: integer - the reward: float ### Conceptual Captions The [Conceptual Captions](https://github.com/google-research-datasets/conceptual-captions/tree/master) dataset, offered by Google LLC, comprises pairs of image links and their corresponding captions. 
Each image has been downloaded and, when required, resized to ensure the maximum dimension does not exceed 352 pixels. ### MetaWorld We used the 50 tasks from [MetaWorld v2](https://github.com/Farama-Foundation/Metaworld). We constrained the episode to a duration of 100 timesteps, which is always sufficient to solve the task. ### MuJoCo We used the 11 environments of Gymnasium MuJoCo. ### OK-VQA The [OK-VQA](https://okvqa.allenai.org/index.html) dataset released by Kenneth Marino, Mohammad Rastegari, Ali Farhadi, Roozbeh Mottaghi was used. The data were formatted to match Hugging Face Datasets' requirements and images were resized such that the largest dimension is at most 352. ### OSCAR We modified the "unshuffled_deduplicated_en" split of the [OSCAR 2019](https://huggingface.co/datasets/oscar) dataset, initially put together by Pedro J. Ortiz, Benoît Sagot, and Laurent Romary and licensed under [CC BY 4.0](https://oscar-project.github.io/documentation/versions/oscar-2019/#license). We cleaned and deduplicated the dataset using [the methods](https://github.com/bigscience-workshop/data-preparation/tree/main/preprocessing/training/01b_oscar_cleaning_and_filtering) and parameters used for the [ROOTS dataset](https://arxiv.org/abs/2303.03915) (Laurençon et al., 2023). The dataset was split into 30 even shards, each cleaned and deduplicated independently before being concatenated again. 
## Considerations for Using the Data ### Known Issues - Some BabyAI tasks are missing due to incompatibility with the training bot: - `babyai-key-in-box` - `babyai-go-to-imp-unlock` - `babyai-unlock-to-unlock` - `babyai-unlock` - For some Atari tasks, the episode is too long, causing an `OverflowError` when loading the dataset: - `atari-enduro` - For some tasks, although the score can be higher than the random agent's, we can't consider the task as solved: - `atari-bowling` - `atari-privateeye` - `atari-solaris` - `atari-venture` - `metaworld-bin-picking` - `metaworld-disassemble` - `metaworld-peg-insert-side` - `metaworld-plate-slide` - `metaworld-push-back` ### Future Developments We plan to expand the dataset to include the following additional domains: - [ ] DM Lab - [ ] Sokoban - [ ] Procgen - [ ] DM Control Suite (w and w/o pixels) ## Additional Information ### Licensing Information This dataset is released under the Apache 2.0 license. ### Citation Information ```bibtex @misc{gallouedec2023giadataset, title={GIA Dataset: A Multi-Modal, Multi-Task Learning Resource}, author={Gallouédec, Quentin and Beeching, Edward and Romac, Clément}, year={2023}, howpublished={\url{https://huggingface.co/datasets/gia-project/gia-dataset}}, note={Part of the GIA Project} } ``` ## Acknowledgment We would like to extend our sincere gratitude to: - [Shengyi Costa Huang](https://huggingface.co/vwxyzjn) for his invaluable assistance with the pretrained models used in this research
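The Dataset Episode Score reported in the tables above can be read as the undiscounted sum of an episode's rewards. A small sketch over an episode record of the shape shown in the Usage section (the episode here is synthetic, not a real dataset row):

```python
# Sketch: computing the undiscounted episode return from an episode record
# of the shape shown in the Usage section. The episode below is synthetic,
# not an actual row from the dataset.

def episode_return(episode: dict) -> float:
    return sum(episode["rewards"])

episode = {
    "continuous_observations": [[0.0, 0.1], [0.2, 0.3]],
    "continuous_actions": [[1.0], [-1.0]],
    "rewards": [0.5, 1.5],
}
print(episode_return(episode))  # 2.0
```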
vjain/CBT
2023-05-27T13:01:00.000Z
[ "license:openrail", "region:us" ]
vjain
null
null
null
0
179
--- license: openrail ---
medal
2023-06-13T12:39:11.000Z
[ "task_categories:other", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:unknown", "disambiguation", "region:us" ]
null
A large medical text dataset (14 GB) curated to 4 GB for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. For example, DHF can be disambiguated to dihydrofolate, diastolic heart failure, dengue hemorrhagic fever or dihydroxyfumarate
@inproceedings{wen-etal-2020-medal, title = "{M}e{DAL}: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining", author = "Wen, Zhi and Lu, Xing Han and Reddy, Siva", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.15", pages = "130--135", abstract = "One of the biggest challenges that prohibit the use of many current NLP methods in clinical settings is the availability of public datasets. In this work, we present MeDAL, a large medical text dataset curated for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. We pre-trained several models of common architectures on this dataset and empirically showed that such pre-training leads to improved performance and convergence speed when fine-tuning on downstream medical tasks.", }
null
9
178
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - unknown multilinguality: - monolingual size_categories: - 10M<n<100M source_datasets: - original task_categories: - other task_ids: [] paperswithcode_id: medal pretty_name: MeDAL tags: - disambiguation dataset_info: features: - name: abstract_id dtype: int32 - name: text dtype: string - name: location sequence: int32 - name: label sequence: string splits: - name: train num_bytes: 3573399948 num_examples: 3000000 - name: test num_bytes: 1190766821 num_examples: 1000000 - name: validation num_bytes: 1191410723 num_examples: 1000000 - name: full num_bytes: 15536883723 num_examples: 14393619 download_size: 21060929078 dataset_size: 21492461215 --- # Dataset Card for the MeDAL dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - 
**Repository:** https://github.com/BruceWen120/medal - **Paper:** https://www.aclweb.org/anthology/2020.clinicalnlp-1.15/ - **Dataset (Kaggle):** https://www.kaggle.com/xhlulu/medal-emnlp - **Dataset (Zenodo):** https://zenodo.org/record/4265632 - **Pretrained model:** https://huggingface.co/xhlu/electra-medal - **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Summary A large medical text dataset (14 GB) curated to 4 GB for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. For example, DHF can be disambiguated to dihydrofolate, diastolic heart failure, dengue hemorrhagic fever or dihydroxyfumarate ### Supported Tasks and Leaderboards Medical abbreviation disambiguation ### Languages English (en) ## Dataset Structure Each file is a table consisting of three columns: * text: The normalized content of an abstract * location: The location (index) of each abbreviation that was substituted * label: The word that was substituted at the given location ### Data Instances An example from the train split is: ``` {'abstract_id': 14145090, 'text': 'velvet antlers vas are commonly used in traditional chinese medicine and invigorant and contain many PET components for health promotion the velvet antler peptide svap is one of active components in vas based on structural study the svap interacts with tgfβ receptors and disrupts the tgfβ pathway we hypothesized that svap prevents cardiac fibrosis from pressure overload by blocking tgfβ signaling SDRs underwent TAC tac or a sham operation T3 one month rats received either svap mgkgday or vehicle for an additional one month tac surgery induced significant cardiac dysfunction FB activation and fibrosis these 
effects were improved by treatment with svap in the heart tissue tac remarkably increased the expression of tgfβ and connective tissue growth factor ctgf ROS species C2 and the phosphorylation C2 of smad and ERK kinases erk svap inhibited the increases in reactive oxygen species C2 ctgf expression and the phosphorylation of smad and erk but not tgfβ expression in cultured cardiac fibroblasts angiotensin ii ang ii had similar effects compared to tac surgery such as increases in αsmapositive CFs and collagen synthesis svap eliminated these effects by disrupting tgfβ IB to its receptors and blocking ang iitgfβ downstream signaling these results demonstrated that svap has antifibrotic effects by blocking the tgfβ pathway in CFs', 'location': [63], 'label': ['transverse aortic constriction']} ``` ### Data Fields The column types are: * text: content of the abstract as a string * location: index of the substitution as an integer * label: substituted word as a string ### Data Splits The following files are present: * `full_data.csv`: The full dataset with all 14M abstracts. * `train.csv`: The subset used to train the baseline and proposed models. * `valid.csv`: The subset used to validate the model during training for hyperparameter selection. * `test.csv`: The subset used to evaluate the model and report the results in the tables. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data The original dataset was retrieved and modified from the [NLM website](https://www.nlm.nih.gov/databases/download/pubmed_medline.html). #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations Details on how the abbreviations were created can be found in section 2.2 (Dataset Creation) of the [ACL ClinicalNLP paper](https://aclanthology.org/2020.clinicalnlp-1.15.pdf). #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases Since the abstracts are written in English, the data is biased towards anglo-centric medical research. If you plan to use a model pre-trained on this dataset for a predominantly non-English community, it is important to verify whether there are negative biases present in your model, and ensure that they are correctly mitigated. For instance, you could fine-tune your dataset on a multilingual medical disambiguation dataset, or collect a dataset specific to your use case. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The ELECTRA model is licensed under [Apache 2.0](https://github.com/google-research/electra/blob/master/LICENSE). The license for the libraries used in this project (`transformers`, `pytorch`, etc.) can be found in their respective GitHub repository. Our model is released under a MIT license. The original dataset was retrieved and modified from the [NLM website](https://www.nlm.nih.gov/databases/download/pubmed_medline.html). By using this dataset, you are bound by the [terms and conditions](https://www.nlm.nih.gov/databases/download/terms_and_conditions_pubmed.html) specified by NLM: > INTRODUCTION > > Downloading data from the National Library of Medicine FTP servers indicates your acceptance of the following Terms and Conditions: No charges, usage fees or royalties are paid to NLM for this data. > > MEDLINE/PUBMED SPECIFIC TERMS > > NLM freely provides PubMed/MEDLINE data. Please note some PubMed/MEDLINE abstracts may be protected by copyright. > > GENERAL TERMS AND CONDITIONS > > * Users of the data agree to: > * acknowledge NLM as the source of the data by including the phrase "Courtesy of the U.S. 
National Library of Medicine" in a clear and conspicuous manner, > * properly use registration and/or trademark symbols when referring to NLM products, and > * not indicate or imply that NLM has endorsed its products/services/applications. > > * Users who republish or redistribute the data (services, products or raw data) agree to: > * maintain the most current version of all distributed data, or > * make known in a clear and conspicuous manner that the products/services/applications do not reflect the most current/accurate data available from NLM. > > * These data are produced with a reasonable standard of care, but NLM makes no warranties express or implied, including no warranty of merchantability or fitness for particular purpose, regarding the accuracy or completeness of the data. Users agree to hold NLM and the U.S. Government harmless from any liability resulting from errors in the data. NLM disclaims any liability for any consequences due to use, misuse, or interpretation of information contained or not contained in the data. > > * NLM does not provide legal advice regarding copyright, fair use, or other aspects of intellectual property rights. See the NLM Copyright page. > > * NLM reserves the right to change the type and format of its machine-readable data. NLM will take reasonable steps to inform users of any changes to the format of the data before the data are distributed via the announcement section or subscription to email and RSS updates. 
### Citation Information

```
@inproceedings{wen-etal-2020-medal,
    title = "{M}e{DAL}: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining",
    author = "Wen, Zhi and Lu, Xing Han and Reddy, Siva",
    booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.15",
    pages = "130--135",
    abstract = "One of the biggest challenges that prohibit the use of many current NLP methods in clinical settings is the availability of public datasets. In this work, we present MeDAL, a large medical text dataset curated for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. We pre-trained several models of common architectures on this dataset and empirically showed that such pre-training leads to improved performance and convergence speed when fine-tuning on downstream medical tasks.",
}
```

### Contributions

Thanks to [@Narsil](https://github.com/Narsil) and [@xhlulu](https://github.com/xhlulu) for adding this dataset.
castorini/mr-tydi-corpus
2022-10-12T20:25:51.000Z
[ "task_categories:text-retrieval", "multilinguality:multilingual", "language:ar", "language:bn", "language:en", "language:fi", "language:id", "language:ja", "language:ko", "language:ru", "language:sw", "language:te", "language:th", "license:apache-2.0", "region:us" ]
castorini
null
null
null
2
178
---
language:
- ar
- bn
- en
- fi
- id
- ja
- ko
- ru
- sw
- te
- th
multilinguality:
- multilingual
task_categories:
- text-retrieval
license: apache-2.0
---

# Dataset Summary

Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. It is designed for monolingual retrieval, specifically to evaluate ranking with learned dense representations.

This dataset stores the documents of Mr. TyDi. To access the queries and judgments, please refer to [castorini/mr-tydi](https://huggingface.co/datasets/castorini/mr-tydi).

# Dataset Structure

The only configuration here is the `language`. As all three folds (train, dev and test) share the same corpus, there is only one fold, 'train', under each language, unlike [castorini/mr-tydi](https://huggingface.co/datasets/castorini/mr-tydi).

An example document entry looks as follows:
```
{
  'docid': '25#0',
  'title': 'Autism',
  'text': 'Autism is a developmental disorder characterized by difficulties with social interaction and communication, ...'
}
```

# Load Dataset

An example to load the dataset:
```
language = 'english'
dataset = load_dataset('castorini/mr-tydi-corpus', language, split='train')
```

# Citation Information

```
@article{mrtydi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
```
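Since the corpus and the queries/judgments live in separate datasets, a typical workflow joins them by `docid`. Below is a minimal, self-contained sketch of that join using toy in-memory entries in place of the real `load_dataset` calls; the query id and the judgment mapping are illustrative assumptions, not the actual Mr. TyDi schema.

```python
# Toy corpus entries mirroring the document schema shown above; in practice
# these would come from castorini/mr-tydi-corpus and castorini/mr-tydi.
corpus = [
    {"docid": "25#0", "title": "Autism", "text": "Autism is a developmental disorder ..."},
    {"docid": "39#0", "title": "Albedo", "text": "Albedo is a measure of reflectance ..."},
]
judgments = {"q1": ["25#0"]}  # hypothetical mapping: query id -> relevant docids

# Build a docid -> document lookup, then resolve judged docids to documents.
doc_by_id = {doc["docid"]: doc for doc in corpus}
relevant_titles = [doc_by_id[docid]["title"] for docid in judgments["q1"]]
print(relevant_titles)  # ['Autism']
```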
bigbio/hallmarks_of_cancer
2022-12-22T15:44:44.000Z
[ "multilinguality:monolingual", "language:en", "license:gpl-3.0", "region:us" ]
bigbio
The Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed publication abstracts manually annotated by experts according to a taxonomy. The taxonomy consists of 37 classes in a hierarchy. Zero or more class labels are assigned to each sentence in the corpus. The labels are found under the "labels" directory, while the tokenized text can be found under "text" directory. The filenames are the corresponding PubMed IDs (PMID).
@article{DBLP:journals/bioinformatics/BakerSGAHSK16, author = {Simon Baker and Ilona Silins and Yufan Guo and Imran Ali and Johan H{\"{o}}gberg and Ulla Stenius and Anna Korhonen}, title = {Automatic semantic classification of scientific literature according to the hallmarks of cancer}, journal = {Bioinform.}, volume = {32}, number = {3}, pages = {432--440}, year = {2016}, url = {https://doi.org/10.1093/bioinformatics/btv585}, doi = {10.1093/bioinformatics/btv585}, timestamp = {Thu, 14 Oct 2021 08:57:44 +0200}, biburl = {https://dblp.org/rec/journals/bioinformatics/BakerSGAHSK16.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
null
1
178
---
language:
- en
bigbio_language:
- English
license: gpl-3.0
multilinguality: monolingual
bigbio_license_shortname: GPL_3p0
pretty_name: Hallmarks of Cancer
homepage: https://github.com/sb895/Hallmarks-of-Cancer
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- TEXT_CLASSIFICATION
---

# Dataset Card for Hallmarks of Cancer

## Dataset Description

- **Homepage:** https://github.com/sb895/Hallmarks-of-Cancer
- **Pubmed:** True
- **Public:** True
- **Tasks:** TXTCLASS

The Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed publication abstracts manually annotated by experts according to a taxonomy. The taxonomy consists of 37 classes in a hierarchy. Zero or more class labels are assigned to each sentence in the corpus. The labels are found under the "labels" directory, while the tokenized text can be found under the "text" directory. The filenames are the corresponding PubMed IDs (PMID).

## Citation Information

```
@article{DBLP:journals/bioinformatics/BakerSGAHSK16,
  author    = {Simon Baker and Ilona Silins and Yufan Guo and Imran Ali and Johan H{\"{o}}gberg and Ulla Stenius and Anna Korhonen},
  title     = {Automatic semantic classification of scientific literature according to the hallmarks of cancer},
  journal   = {Bioinform.},
  volume    = {32},
  number    = {3},
  pages     = {432--440},
  year      = {2016},
  url       = {https://doi.org/10.1093/bioinformatics/btv585},
  doi       = {10.1093/bioinformatics/btv585},
  timestamp = {Thu, 14 Oct 2021 08:57:44 +0200},
  biburl    = {https://dblp.org/rec/journals/bioinformatics/BakerSGAHSK16.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
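Because zero or more of the 37 taxonomy classes can be attached to each sentence, the task is multi-label classification. A common way to prepare such labels for a classifier is multi-hot encoding; the sketch below illustrates this with a hypothetical three-class subset of the taxonomy (the class names are for illustration only, not the corpus's exact label strings).

```python
# Hypothetical subset of the 37-class taxonomy (illustrative names only).
CLASSES = [
    "sustaining proliferative signaling",
    "evading growth suppressors",
    "resisting cell death",
]
CLASS_INDEX = {name: i for i, name in enumerate(CLASSES)}

def multi_hot(sentence_labels):
    """Encode a sentence's zero-or-more class labels as a 0/1 vector."""
    vector = [0] * len(CLASSES)
    for name in sentence_labels:
        vector[CLASS_INDEX[name]] = 1
    return vector

print(multi_hot(["resisting cell death"]))  # [0, 0, 1]
print(multi_hot([]))                        # [0, 0, 0]  (unlabeled sentence)
```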
ajaykarthick/imdb-movie-reviews
2023-02-08T21:08:35.000Z
[ "task_categories:text-classification", "task_categories:token-classification", "task_categories:feature-extraction", "size_categories:10K<n<100K", "region:us" ]
ajaykarthick
null
null
null
1
178
---
task_categories:
- text-classification
- token-classification
- feature-extraction
pretty_name: Movie-Reviews
size_categories:
- 10K<n<100K
---

# IMDB Movie Reviews

![movie_reviews](images/movie_reviews.jpg)

This is a sizable dataset for binary sentiment classification. It contains 50,000 highly polar movie reviews for training models on text classification tasks.

The dataset is downloaded from https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz

The data is processed and split into training and test datasets (a 20% test split). The training dataset contains 40,000 reviews and the test dataset contains 10,000 reviews, with an equal label distribution in both: the training dataset has 20,000 records for each of the positive and negative classes, and the test dataset has 5,000 records for each label.

### Citation Information

```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
  author    = {Maas, Andrew L.  and  Daly, Raymond E.  and  Pham, Peter T.  and  Huang, Dan  and  Ng, Andrew Y.  and  Potts, Christopher},
  title     = {Learning Word Vectors for Sentiment Analysis},
  booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
  month     = {June},
  year      = {2011},
  address   = {Portland, Oregon, USA},
  publisher = {Association for Computational Linguistics},
  pages     = {142--150},
  url       = {http://www.aclweb.org/anthology/P11-1015}
}
```
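As a rough illustration of the balanced 80/20 split described in this card, here is a minimal, self-contained sketch of a stratified split that keeps the label distribution equal in both folds. It uses synthetic labels matching the dataset's totals rather than the actual reviews, and is not the card authors' processing code.

```python
import random

def stratified_split(labels, test_fraction=0.2, seed=0):
    """Split example indices per label so each fold keeps the same class balance."""
    rng = random.Random(seed)
    by_label = {}
    for idx, label in enumerate(labels):
        by_label.setdefault(label, []).append(idx)
    train_idx, test_idx = [], []
    for label, indices in by_label.items():
        rng.shuffle(indices)
        cut = int(len(indices) * test_fraction)
        test_idx.extend(indices[:cut])
        train_idx.extend(indices[cut:])
    return train_idx, test_idx

# 50,000 synthetic reviews, 25,000 per class, mirroring the dataset's totals.
labels = ["pos"] * 25_000 + ["neg"] * 25_000
train_idx, test_idx = stratified_split(labels)
print(len(train_idx), len(test_idx))  # 40000 10000
```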
red_caps
2023-01-25T14:43:07.000Z
[ "task_categories:image-to-text", "task_ids:image-captioning", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2111.11431", "region:us" ]
null
RedCaps is a large-scale dataset of 12M image-text pairs collected from Reddit. Images and captions from Reddit depict and describe a wide variety of objects and scenes. The data is collected from a manually curated set of subreddits (350 total), which give coarse image labels and allow steering of the dataset composition without labeling individual instances.
@misc{desai2021redcaps, title={RedCaps: web-curated image-text data created by the people, for the people}, author={Karan Desai and Gaurav Kaul and Zubin Aysola and Justin Johnson}, year={2021}, eprint={2111.11431}, archivePrefix={arXiv}, primaryClass={cs.CV} }
null
43
177
--- annotations_creators: - found language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10M<n<100M source_datasets: - original task_categories: - image-to-text task_ids: - image-captioning paperswithcode_id: redcaps pretty_name: RedCaps dataset_info: features: - name: image_id dtype: string - name: author dtype: string - name: image_url dtype: string - name: raw_caption dtype: string - name: caption dtype: string - name: subreddit dtype: class_label: names: '0': abandonedporn '1': abandoned '2': absoluteunits '3': airplants '4': alltheanimals '5': amateurphotography '6': amateurroomporn '7': animalporn '8': antiques '9': antkeeping '10': ants '11': aquariums '12': architectureporn '13': artefactporn '14': astronomy '15': astrophotography '16': australiancattledog '17': australianshepherd '18': autumnporn '19': averagebattlestations '20': awwducational '21': awwnverts '22': axolotls '23': backpacking '24': backyardchickens '25': baking '26': ballpython '27': barista '28': bassfishing '29': battlestations '30': bbq '31': beagle '32': beardeddragons '33': beekeeping '34': beerandpizza '35': beerporn '36': beerwithaview '37': beginnerwoodworking '38': bengalcats '39': bento '40': bernesemountaindogs '41': berries '42': bettafish '43': bicycling '44': bikecommuting '45': birding '46': birdphotography '47': birdpics '48': birdsofprey '49': birds '50': blackcats '51': blacksmith '52': bladesmith '53': boatporn '54': bonsai '55': bookporn '56': bookshelf '57': bordercollie '58': bostonterrier '59': botanicalporn '60': breadit '61': breakfastfood '62': breakfast '63': bridgeporn '64': brochet '65': budgetfood '66': budgies '67': bulldogs '68': burgers '69': butterflies '70': cabinporn '71': cactus '72': cakedecorating '73': cakewin '74': cameras '75': campingandhiking '76': camping '77': carnivorousplants '78': carpentry '79': carporn '80': cassetteculture '81': castiron '82': castles '83': casualknitting '84': 
catpictures '85': cats '86': ceramics '87': chameleons '88': charcuterie '89': cheesemaking '90': cheese '91': chefit '92': chefknives '93': chickens '94': chihuahua '95': chinchilla '96': chinesefood '97': churchporn '98': cider '99': cityporn '100': classiccars '101': cockatiel '102': cocktails '103': coffeestations '104': coins '105': cookiedecorating '106': corgi '107': cornsnakes '108': cozyplaces '109': crafts '110': crestedgecko '111': crochet '112': crossstitch '113': crows '114': crystals '115': cupcakes '116': dachshund '117': damnthatsinteresting '118': desertporn '119': designmyroom '120': desksetup '121': dessertporn '122': dessert '123': diy '124': dobermanpinscher '125': doggos '126': dogpictures '127': drunkencookery '128': duck '129': dumpsterdiving '130': earthporn '131': eatsandwiches '132': embroidery '133': entomology '134': equestrian '135': espresso '136': exposureporn '137': eyebleach '138': f1porn '139': farming '140': femalelivingspace '141': fermentation '142': ferrets '143': fireporn '144': fishing '145': fish '146': flowers '147': flyfishing '148': foodporn '149': food '150': foraging '151': fossilporn '152': fountainpens '153': foxes '154': frenchbulldogs '155': frogs '156': gardening '157': gardenwild '158': geckos '159': gemstones '160': geologyporn '161': germanshepherds '162': glutenfree '163': goldenretrievers '164': goldfish '165': gold '166': greatpyrenees '167': grilledcheese '168': grilling '169': guineapigs '170': gunporn '171': guns '172': hamsters '173': handtools '174': healthyfood '175': hedgehog '176': helicopters '177': herpetology '178': hiking '179': homestead '180': horses '181': hotpeppers '182': houseplants '183': houseporn '184': husky '185': icecreamery '186': indoorgarden '187': infrastructureporn '188': insects '189': instantpot '190': interestingasfuck '191': interiordesign '192': itookapicture '193': jellyfish '194': jewelry '195': kayakfishing '196': kayaking '197': ketorecipes '198': knifeporn '199': knives 
'200': labrador '201': leathercraft '202': leopardgeckos '203': lizards '204': lookatmydog '205': macarons '206': machineporn '207': macroporn '208': malelivingspace '209': mead '210': mealprepsunday '211': mechanicalkeyboards '212': mechanicalpencils '213': melts '214': metalworking '215': microgreens '216': microporn '217': mildlyinteresting '218': mineralporn '219': monitors '220': monstera '221': mostbeautiful '222': motorcycleporn '223': muglife '224': mushroomgrowers '225': mushroomporn '226': mushrooms '227': mycology '228': natureisfuckinglit '229': natureporn '230': nebelung '231': orchids '232': otters '233': outdoors '234': owls '235': parrots '236': pelletgrills '237': pens '238': perfectfit '239': permaculture '240': photocritique '241': photographs '242': pics '243': pitbulls '244': pizza '245': plantbaseddiet '246': plantedtank '247': plantsandpots '248': plants '249': pomeranians '250': pottery '251': pourpainting '252': proplifting '253': pugs '254': pug '255': quilting '256': rabbits '257': ramen '258': rarepuppers '259': reeftank '260': reptiles '261': resincasting '262': roomporn '263': roses '264': rottweiler '265': ruralporn '266': sailing '267': salsasnobs '268': samoyeds '269': savagegarden '270': scotch '271': seaporn '272': seriouseats '273': sewing '274': sharks '275': shiba '276': shihtzu '277': shrimptank '278': siamesecats '279': siberiancats '280': silverbugs '281': skyporn '282': sloths '283': smoking '284': snails '285': snakes '286': sneakers '287': sneks '288': somethingimade '289': soup '290': sourdough '291': sousvide '292': spaceporn '293': spicy '294': spiderbro '295': spiders '296': squirrels '297': steak '298': streetphotography '299': succulents '300': superbowl '301': supermodelcats '302': sushi '303': tacos '304': tarantulas '305': tastyfood '306': teaporn '307': tea '308': tequila '309': terrariums '310': thedepthsbelow '311': thriftstorehauls '312': tinyanimalsonfingers '313': tonightsdinner '314': toolporn '315': tools 
'316': torties '317': tortoise '318': tractors '319': trailrunning '320': trains '321': trucks '322': turtle '323': underwaterphotography '324': upcycling '325': urbanexploration '326': urbanhell '327': veganfoodporn '328': veganrecipes '329': vegetablegardening '330': vegetarian '331': villageporn '332': vintageaudio '333': vintage '334': vinyl '335': volumeeating '336': watches '337': waterporn '338': weatherporn '339': wewantplates '340': wildernessbackpacking '341': wildlifephotography '342': wine '343': winterporn '344': woodcarving '345': woodworking '346': workbenches '347': workspaces '348': yarnaddicts '349': zerowaste - name: score dtype: int32 - name: created_utc dtype: timestamp[s, tz=UTC] - name: permalink dtype: string - name: crosspost_parents sequence: string config_name: all splits: - name: train num_bytes: 3378544525 num_examples: 12011121 download_size: 1061908181 dataset_size: 3378544525 --- # Dataset Card for RedCaps ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation 
Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [RedCaps homepage](https://redcaps.xyz/) - **Repository:** [RedCaps repository](https://github.com/redcaps-dataset/redcaps-downloader) - **Paper:** [RedCaps: web-curated image-text data created by the people, for the people](https://arxiv.org/abs/2111.11431) - **Leaderboard:** - **Point of Contact:** [Karan Desai](mailto:kdexd@umich.edu) ### Dataset Summary RedCaps is a large-scale dataset of 12M image-text pairs collected from Reddit. Images and captions from Reddit depict and describe a wide variety of objects and scenes. The data is collected from a manually curated set of subreddits (350 total), which give coarse image labels and allow steering of the dataset composition without labeling individual instances. RedCaps data is created *by the people, for the people* – it contains everyday things that users like to share on social media, for example hobbies (r/crafts) and pets (r/shiba). Captions often contain specific and fine-grained descriptions (northern cardinal, taj mahal). Subreddit names provide relevant image labels (r/shiba) even when captions may not (mlem!), and sometimes may group many visually unrelated images through a common semantic meaning (r/perfectfit). ### Dataset Preprocessing This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. 
To fetch the images, use the following code:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib

import PIL.Image

from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent


USER_AGENT = get_datasets_user_agent()


def fetch_single_image(image_url, timeout=None, retries=0):
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image


def fetch_images(batch, num_threads, timeout=None, retries=0):
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
    return batch


num_threads = 20
dset = load_dataset("red_caps", "rabbits_2017")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```

Some image links point to more than one image.
You can process and download those as follows:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import os
import re
import urllib

import PIL.Image

import datasets
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent


USER_AGENT = get_datasets_user_agent()


def fetch_single_image(image_url, timeout=None, retries=0):
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image


def fetch_images(batch, num_threads, timeout=None, retries=0):
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(
            executor.map(
                lambda image_urls: [fetch_single_image_with_args(image_url) for image_url in image_urls],
                batch["image_url"],
            )
        )
    return batch


def process_image_urls(batch):
    processed_batch_image_urls = []
    for image_url in batch["image_url"]:
        processed_example_image_urls = []
        image_url_splits = re.findall(r"http\S+", image_url)
        for image_url_split in image_url_splits:
            if "imgur" in image_url_split and "," in image_url_split:
                for image_url_part in image_url_split.split(","):
                    if not image_url_part:
                        continue
                    image_url_part = image_url_part.strip()
                    root, ext = os.path.splitext(image_url_part)
                    if not root.startswith("http"):
                        root = "http://i.imgur.com/" + root
                    root = root.split("#")[0]
                    if not ext:
                        ext = ".jpg"
                    ext = re.split(r"[?%]", ext)[0]
                    image_url_part = root + ext
                    processed_example_image_urls.append(image_url_part)
            else:
                processed_example_image_urls.append(image_url_split)
        processed_batch_image_urls.append(processed_example_image_urls)
    batch["image_url"] = processed_batch_image_urls
    return batch


dset = load_dataset("red_caps", "rabbits_2017")
dset = dset.map(process_image_urls, batched=True, num_proc=4)
features = dset["train"].features.copy()
features["image"] = datasets.Sequence(datasets.Image())
num_threads = 20
dset = dset.map(fetch_images, batched=True, batch_size=100, features=features, fn_kwargs={"num_threads": num_threads})
```

Note that in the above code, we use the `datasets.Sequence` feature to represent a list of images for the multi-image links.

### Supported Tasks and Leaderboards

From the paper:
> We have used our dataset to train deep neural networks that perform image captioning, and that learn transferable visual representations for a variety of downstream visual recognition tasks (image classification, object detection, instance segmentation).
> We anticipate that the dataset could be used for a variety of vision-and-language (V&L) tasks, such as image or text retrieval or text-to-image synthesis.

### Languages

All of the subreddits in RedCaps use English as their primary language.

## Dataset Structure

### Data Instances

Each instance in RedCaps represents a single Reddit image post:

```
{
  'image_id': 'bpzj7r',
  'author': 'djasz1',
  'image_url': 'https://i.redd.it/ho0wntksivy21.jpg',
  'raw_caption': 'Found on a friend’s property in the Keys FL. She is now happily living in my house.',
  'caption': 'found on a friend's property in the keys fl. she is now happily living in my house.',
  'subreddit': 3,
  'score': 72,
  'created_utc': datetime.datetime(2019, 5, 18, 1, 36, 41),
  'permalink': '/r/airplants/comments/bpzj7r/found_on_a_friends_property_in_the_keys_fl_she_is/',
  'crosspost_parents': None
}
```

### Data Fields

- `image_id`: Unique alphanumeric ID of the image post (assigned by Reddit).
- `author`: Reddit username of the image post author.
- `image_url`: Static URL for downloading the image associated with the post.
- `raw_caption`: Textual description of the image, written by the post author.
- `caption`: Cleaned version of "raw_caption" by us (see Q35).
- `subreddit`: Name of subreddit where the post was submitted.
- `score`: Net upvotes (discounting downvotes) received by the image post. This field is equal to `None` if the image post is a crosspost.
- `created_utc`: Integer time epoch (in UTC) when the post was submitted to Reddit.
- `permalink`: Partial URL of the Reddit post (https://reddit.com/<permalink>).
- `crosspost_parents`: List of parent posts. This field is optional.

### Data Splits

All the data is contained in the training set. The training set has nearly 12M (12,011,111) instances.

From the paper:
> We intend our dataset to be primarily used for pre-training with one or more specific downstream task(s) in mind. Hence, all instances in our dataset would be used for training while the validation split is derived from downstream task(s). If users require a validation split, we recommend sampling it such that it follows the same subreddit distribution as entire dataset.

## Dataset Creation

### Curation Rationale

From the paper:
> Large datasets of image-text pairs are widely used for pre-training generic representations that transfer to a variety of downstream vision and vision-and-language tasks. Existing public datasets of this kind were curated from search engine results (SBU Captions [1]) or HTML alt-text from arbitrary web pages (Conceptual Captions [2, 31]). They performed complex data filtering to deal with noisy web data. Due to aggressive filtering, their data collection is inefficient and diversity is artificially suppressed. We argue that the quality of data depends on its source, and the human intent behind its creation. In this work, we explore Reddit – a social media platform, for curating high quality data. We introduce RedCaps – a large dataset of 12M image-text pairs from Reddit.
While we expect the use-cases of RedCaps to be similar to existing datasets, we discuss how Reddit as a data source leads to fast and lightweight collection, better data quality, lets us easily steer the data distribution, and facilitates ethically responsible data curation. ### Source Data #### Initial Data Collection and Normalization From the paper: > **Data Collection Pipeline** Reddit’s uniform structure allows us to parallelize data collection as independent tasks – each task involves collecting posts submitted to a single subreddit in one year. Our collection pipeline has three steps: (1) subreddit selection, (2) image post filtering, and (3) caption cleaning. **Step 1**. Subreddit selection: We collect data from a manually curated set of subreddits. Subreddits have their own rules, community norms, and moderators so curating subreddits allows us to steer the dataset’s composition without annotating individual instances. We select subreddits with a high volume of images posts, where images tend to be photographs (rather than memes, drawings, screenshots, etc) and post titles tend to describe image content (rather than making jokes, political commentary, etc). We do not select any NSFW, banned, or quarantined subreddits. We want to minimize the number of people that appear in RedCaps, so we omit subreddits whose primary purpose is to share or comment on images of people (such as celebrity pics or user selfies). We choose subreddits focused on general photography (r/pics, r/itookapicture), animals (r/axolotls, r/birdsofprey, r/dachshund), plants (r/roses, r/succulents), objects (r/classiccars, r/trains, r/mechanicalkeyboards), food (r/steak, r/macarons), scenery (r/cityporn1 , r/desertporn), or activities (r/carpentry, r/kayaking). In total we collect data from 350 subreddits; the full list can be found in Appendix A. **Step 2**. 
Image post filtering: We use Pushshift [41] and Reddit [42, 43] APIs to download all image posts submitted to our selected subreddits from 2008–2020. Posts are collected at least six months after their creation to let upvotes stabilize. We only collect posts with images hosted on three domains: Reddit (i.redd.it), Imgur (i.imgur.com), and Flickr (staticflickr.com). Some image posts contain multiple images (gallery posts) – in this case we only collect the first image and associate it with the caption. We discard posts with < 2 upvotes to avoid unappealing content, and we discard posts marked NSFW (by their authors or subreddit moderators) to avoid pornographic or disturbing content. **Step 3**. Caption cleaning: We expect Reddit post titles to be less noisy than other large-scale sources of image captions such as alt-text [2, 31], so we apply minimal text cleaning. We lowercase captions and use ftfy [44] to remove character accents, emojis, and non-latin characters, following [29, 35, 36]. Then we apply simple pattern matching to discard all sub-strings enclosed in brackets ((.*), [.*]). These sub-strings usually give non-semantic information: original content tags [oc], image resolutions (800x600 px), camera specs (shot with iPhone), self-promotion [Instagram: @user], and other references (link in comments). Finally, like [31] we replace social media handles (words starting with ‘@’) with a [USR] token to protect user privacy and reduce redundancy. Due to such filtering, ≈12K (0.1%) captions in our dataset are empty strings. We do not discard them, as subreddit names alone provide meaningful supervision. Unlike CC-3M or CC-12M that discard captions without nouns or that don’t overlap image tags, we do not discard any instances in this step. Through this pipeline, we collect 13.4M instances from 350 subreddits. 
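The Step 3 caption-cleaning recipe above (lowercasing, dropping bracketed sub-strings, replacing social media handles with a [USR] token) can be approximated with the standard library. The sketch below is an illustrative approximation only, not the authors' actual implementation, and it omits the ftfy-based accent/emoji normalization step:

```python
import re

def clean_caption(raw_caption: str) -> str:
    """Approximate the RedCaps caption-cleaning steps described above."""
    caption = raw_caption.lower()
    # Drop bracketed sub-strings such as [oc], (800x600 px), (shot with iphone).
    caption = re.sub(r"\(.*?\)|\[.*?\]", "", caption)
    # Replace social media handles (words starting with '@') with a [USR] token.
    caption = re.sub(r"@\S+", "[USR]", caption)
    # Collapse the whitespace left behind by the removals.
    return re.sub(r"\s+", " ", caption).strip()

print(clean_caption("Sunset over the lake [OC] (shot with iPhone) by @someuser"))
# sunset over the lake by [USR]
```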
Our collection pipeline is less resource-intensive than existing datasets – we do not require webpage crawlers, search engines, or large databases of indexed webpages. RedCaps is easily extensible in the future by selecting more subreddits and collecting posts from future years. Next, we perform additional filtering to mitigate user privacy risks and harmful stereotypes in RedCaps, resulting in final size of 12M instances.

#### Who are the source language producers?

Reddit is the singular data source for RedCaps.

### Annotations

#### Annotation process

The dataset is built using a fully automatic data collection pipeline which doesn't require any human annotators.

#### Who are the annotators?

The annotation process doesn't require any human annotators.

### Personal and Sensitive Information

From the paper:
> **Does the dataset relate to people?** The dataset pertains to people in that people wrote the captions and posted images to Reddit that we curate in RedCaps. We made specific design choices while curating RedCaps to avoid large quantities of images containing people: (a) We collect data from manually curated subreddits in which most content primarily pertains to animals, objects, places, or activities. We exclude all subreddits whose primary purpose is to share and describe images of people (such as celebrity photos or user selfies). (b) We use an off-the-shelf face detector to find and remove images with potential presence of human faces. We manually checked 50K random images in RedCaps (Q16) and found 79 images with identifiable human faces – the entire dataset may have ≈19K (0.15%) images with identifiable people. Refer Section 2.2 in the main paper.

> **Is it possible to identify one or more natural persons, either directly or indirectly (i.e., in combination with other data) from the dataset?** Yes, all instances in RedCaps include Reddit usernames of their post authors.
> This could be used to look up the Reddit user profile, and some Reddit users may have identifying information in their profiles. Some images may contain human faces which could be identified by appearance. However, note that all this information is already public on Reddit, and searching it in RedCaps is no easier than searching directly on Reddit.

> **Were the individuals in question notified about the data collection?** No. Reddit users are anonymous by default, and are not required to share their personal contact information (email, phone numbers, etc.). Hence, the only way to notify the authors of RedCaps image posts is by sending them private messages on Reddit. This is practically difficult to do manually, and will be classified as spam and blocked by Reddit if attempted to programmatically send a templated message to millions of users.

> **Did the individuals in question consent to the collection and use of their data?** Users did not explicitly consent to the use of their data in our dataset. However, by uploading their data on Reddit, they consent that it would appear on the Reddit platform and will be accessible via the official Reddit API (which we use to collect RedCaps).

> **If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses?** Users have full control over the presence of their data in our dataset. If users wish to revoke their consent, they can delete the underlying Reddit post – it will be automatically removed from RedCaps since we distributed images as URLs. Moreover, we provide an opt-out request form on our dataset website for anybody to request removal of an individual instance if it is potentially harmful (e.g. NSFW, violates privacy, harmful stereotypes, etc.).
## Considerations for Using the Data ### Social Impact of Dataset From the paper: > **Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted?** No. ### Discussion of Biases From the paper: > **Harmful Stereotypes**: Another concern with Reddit data is that images or language may represent harmful stereotypes about gender, race, or other characteristics of people [48, 49, 51]. We select only non-NSFW subreddits with active moderation for collecting data. This stands in contrast to less curated uses of Reddit data, such as GPT-2 [35] whose training data includes at least 63K documents from banned or quarantined subreddits which may contain toxic language [53]. We attempt to further reduce harmful stereotypes in two ways: > * **NSFW images**: We use the InceptionV3 [54] model from [55] to filter images detected as porn or hentai with confidence ≥ 0.9. Similar to face filtering, we estimated precision of our filtering and estimated amount of missed detections, shown in Table 1. The model detects 87K images with low precision (∼1%) – most detections are non-NSFW images with pink and beige hues. > * **Potentially derogatory language**: We filter instances whose captions contain words or phrases from a common blocklist [56]. It is important to note that such coarse filtering might suppress language from marginalized groups reclaiming slurs [51]; however, as RedCaps is not intended to describe people, we believe this is a pragmatic tradeoff to avoid propagating harmful labels. > **Reddit demographics**: Reddit’s user demographics are not representative of the population at large. Compared to US adults, Reddit users skew male (69% vs 49%), young (58% 18-29 years old vs 22%), college educated (36% vs 28%), and politically liberal (41% vs 25%) [57]. Reddit users are predominantly white (63%) [57], and 49% of desktop traffic to Reddit comes from the United States [58]. 
All of the subreddits in RedCaps use English as their primary language. Taken together, these demographic biases likely also bias the types of objects and places that appear in images on Reddit, and the language used to describe these images. We do not offer explicit countermeasures to these biases, but users of RedCaps should keep in mind that size doesn’t guarantee diversity [51]. Subtler issues may also exist, such as imbalanced representation of demographic groups [59] or gender bias in object co-occurrence [60] or language [61]. These are hard to control in internet data, so we release RedCaps with explicit instructions on suitable use-cases; specifically requesting models not be trained to identify people, or make decisions that impact people. We document these instructions and other terms-of-use in a datasheet [45], provided in Appendix G. > **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?** The scale of RedCaps means that we are unable to verify the contents of all images and captions. However we have tried to minimize the possibility that RedCaps contains data that might be offensive, insulting, threatening, or might cause anxiety via the following mitigations: (a) We manually curate the set of subreddits from which to collect data; we only chose subreddits that are not marked NSFW and which generally contain non-offensive content. (b) Within our curated subreddits, we did not include any posts marked NSFW. (c) We removed all instances whose captions contained any of the 400 potentially offensive words or phrases. Refer Section 2.2 in the main paper. (d) We remove all instances whose images were flagged NSFW by an off-the-shelf detector. We manually checked 50K random images in RedCaps and found one image containing nudity (exposed buttocks; no identifiable face). 
Refer Section 2.2 in the main paper. > **Does the dataset identify any subpopulations (e.g., by age, gender)?** RedCaps does not explicitly identify any subpopulations. Since some images contain people and captions are free-form natural language written by Reddit users, it is possible that some captions may identify people appearing in individual images as part of a subpopulation. > **Were any ethical review processes conducted (e.g., by an institutional review board)?** We did not conduct a formal ethical review process via institutional review boards. However, as described in Section 2.2 of the main paper and Q16 we employed several filtering mechanisms to try to remove instances that could be problematic. ### Other Known Limitations From the paper: > **Are there any errors, sources of noise, or redundancies in the dataset?** RedCaps is noisy by design since image-text pairs on the internet are noisy and unstructured. Some instances may also have duplicate images and captions – Reddit users may have shared the same image post in multiple subreddits. Such redundancies constitute a very small fraction of the dataset, and should have almost no effect in training large-scale models. > **Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals' non-public communications)?** No, the subreddits included in RedCaps do not cover topics that may be considered confidential. All posts were publicly shared on Reddit prior to inclusion in RedCaps. ## Additional Information ### Dataset Curators From the paper: > Four researchers at the University of Michigan (affiliated as of 2021) have created RedCaps: Karan Desai, Gaurav Kaul, Zubin Aysola, and Justin Johnson. ### Licensing Information The image metadata is licensed under the CC-BY 4.0 license. 
Additionally, uses of this dataset are subject to Reddit API terms (https://www.reddit.com/wiki/api-terms) and users must comply with Reddit's User Agreement, Content Policy, and Privacy Policy – all accessible at https://www.redditinc.com/policies. From the paper: > RedCaps should only be used for non-commercial research. RedCaps should not be used for any tasks that involve identifying features related to people (facial recognition, gender, age, ethnicity identification, etc.) or make decisions that impact people (mortgages, job applications, criminal sentences; or moderation decisions about user-uploaded data that could result in bans from a website). Any commercial and for-profit uses of RedCaps are restricted – it should not be used to train models that will be deployed in production systems as part of a product offered by businesses or government agencies. ### Citation Information ```bibtex @misc{desai2021redcaps, title={RedCaps: web-curated image-text data created by the people, for the people}, author={Karan Desai and Gaurav Kaul and Zubin Aysola and Justin Johnson}, year={2021}, eprint={2111.11431}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
bertin-project/alpaca-spanish
2023-03-24T11:38:19.000Z
[ "task_categories:text-generation", "language:es", "license:cc-by-4.0", "instruction-finetuning", "region:us" ]
bertin-project
null
null
null
18
177
--- license: cc-by-4.0 language: - es tags: - instruction-finetuning pretty_name: BERTIN Alpaca Spanish task_categories: - text-generation dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 21439975 num_examples: 51942 download_size: 13178075 dataset_size: 21439975 --- # BERTIN Alpaca Spanish This dataset is a Spanish translation of [alpaca_data_cleaned.json](https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_cleaned.json), a cleaned version of the [Alpaca dataset made at Stanford](https://huggingface.co/datasets/tatsu-lab/alpaca). An [earlier version](https://huggingface.co/datasets/bertin-project/alpaca-spanish/blob/main/nllb/spa_train.json.gz) used [Facebook's NLLB 1.3B model](https://huggingface.co/facebook/nllb-200-1.3B), but the current version uses OpenAI's `gpt-3.5-turbo`; hence, under OpenAI's terms of use, this dataset cannot be used to create models that compete with OpenAI.
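The records follow the Alpaca `instruction`/`input`/`output` schema described in the features above. A minimal sketch of turning one record into a single training prompt (the template, including the Spanish section headers, is an assumption for illustration, not something the dataset prescribes):

```python
def build_prompt(example: dict) -> str:
    """Format one instruction/input/output record into a prompt string.

    The layout below is a common Alpaca-style template; it is not part
    of the dataset itself.
    """
    if example.get("input"):
        return (
            f"### Instrucción:\n{example['instruction']}\n\n"
            f"### Entrada:\n{example['input']}\n\n"
            f"### Respuesta:\n{example['output']}"
        )
    return (
        f"### Instrucción:\n{example['instruction']}\n\n"
        f"### Respuesta:\n{example['output']}"
    )


record = {"instruction": "Resume el texto.", "input": "", "output": "Un resumen."}
prompt = build_prompt(record)
print(prompt.splitlines()[0])  # → ### Instrucción:
```

Records with an empty `input` field simply drop the `### Entrada:` section.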
thefcraft/civitai-stable-diffusion-337k
2023-09-26T07:10:40.000Z
[ "annotations_creators:no-annotation", "language_creators:thefcraft", "size_categories:1M<n<10M", "source_datasets:civitai", "language:en", "region:us" ]
thefcraft
null
null
null
8
177
--- annotations_creators: - no-annotation language_creators: - thefcraft language: - en pretty_name: civitai-stable-diffusion-337k size_categories: - 1M<n<10M source_datasets: - civitai --- ### How to Use
```python
from datasets import load_dataset

dataset = load_dataset("thefcraft/civitai-stable-diffusion-337k")
print(dataset['train'][0])
```
### Download images Download the zip files from the images directory at https://huggingface.co/datasets/thefcraft/civitai-stable-diffusion-337k/tree/main/images; they contain some of the images, keyed by id.
```python
from zipfile import ZipFile

# Extract a downloaded archive into the current directory
with ZipFile("filename.zip", 'r') as zObject:
    zObject.extractall()
```
### Dataset Summary GitHub URL: https://github.com/thefcraft/civitai-stable-diffusion-337k Dataset: civitai-stable-diffusion-337k This dataset contains 337k Civitai image URLs with their prompts and generation metadata, all collected via the Civitai API. Project: https://github.com/thefcraft/nsfw-prompt-detection-sd (a model was trained on this dataset). Data structure for `othertype/civitai.json`:
```
{'items': [
   {'id': 100657,
    'url': 'https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/2338276a-87f7-4a1e-f92a-776a18ee4200/width=768/2338276a-87f7-4a1e-f92a-776a18ee4200.jpeg',
    'hash': 'U5Exz_00.8D$t89Z%M0100~VD*RktQxaIU~p',
    'width': 768, 'height': 1368, 'nsfw': True,
    'createdAt': '2023-02-14T10:05:11.498Z', 'postId': 60841,
    'stats': {'cryCount': 0, 'laughCount': 0, 'likeCount': 26, 'dislikeCount': 0, 'heartCount': 50, 'commentCount': 4},
    'meta': {'ENSD': '31337', 'Size': '512x912', 'seed': 3994946333, 'Model': 'AbyssOrangeMix2_sfw', 'steps': 20,
             'prompt': '<lora:hiqcg_body-epoch-000004:0.5>, <lora:hiqcg_face-epoch-000004:0.4>, hiqcgbody, hiqcgface, 1girl, full body, standing, \ndetailed skin texture, detailed cloth texture, beautiful detailed face,\nmasterpiece, best quality, ultra detailed, 8k, intricate details,',
             'sampler': 'DPM++ 2M Karras', 'cfgScale': 7, 'Clip skip': '2',
             'resources': [{'hash': '038ba203d8', 'name': 'AbyssOrangeMix2_sfw', 'type': 'model'}],
             'Model hash': '038ba203d8', 'Hires upscale': '1.5', 'Hires upscaler': 'Latent',
             'negativePrompt': 'EasyNegative, extra fingers,fewer fingers, multiple girls, multiple views,',
             'Denoising strength': '0.6'},
    'username': 'NeoClassicalRibbon'},
   {..}, ..],
 'metadata': {'totalItems': 327145}}
```
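Each item nests generation details under `meta` and engagement counts under `stats`, as in the sample above. A minimal sketch (assuming that layout; `meta` may be missing or `None` for some posts) that pulls out the prompt, the NSFW flag, and a simple reaction total:

```python
def summarize_item(item: dict) -> dict:
    """Extract prompt text, NSFW flag, and total reactions from one item.

    Assumes the item layout shown above; missing keys fall back to
    empty/zero values.
    """
    meta = item.get("meta") or {}
    stats = item.get("stats") or {}
    reactions = sum(
        stats.get(k, 0)
        for k in ("cryCount", "laughCount", "likeCount", "heartCount")
    )
    return {
        "prompt": meta.get("prompt", ""),
        "nsfw": bool(item.get("nsfw", False)),
        "reactions": reactions,
    }


item = {"nsfw": True,
        "stats": {"likeCount": 26, "heartCount": 50, "cryCount": 0,
                  "laughCount": 0, "dislikeCount": 0, "commentCount": 4},
        "meta": {"prompt": "1girl, full body"}}
print(summarize_item(item))  # → {'prompt': '1girl, full body', 'nsfw': True, 'reactions': 76}
```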
ChilleD/MultiArith
2023-05-02T01:44:21.000Z
[ "region:us" ]
ChilleD
null
null
null
1
177
Entry not found
jinho8345/funsd
2023-07-29T09:06:10.000Z
[ "region:us" ]
jinho8345
null
null
null
0
177
--- dataset_info: features: - name: img dtype: image - name: filename dtype: string - name: boxes sequence: sequence: int64 - name: labels sequence: string - name: words list: list: - name: box sequence: int64 - name: text dtype: string - name: linkings sequence: sequence: sequence: int64 - name: ids sequence: int64 splits: - name: train num_bytes: 13690247.0 num_examples: 149 - name: test num_bytes: 4885049.0 num_examples: 50 download_size: 16731921 dataset_size: 18575296.0 --- # Dataset Card for "funsd" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
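Each example stores form words with their bounding boxes under the `words` feature. A small sketch that flattens one record's words into plain text, assuming `words` is a list of word groups, each a list of `{box, text}` entries, as the schema above suggests (the sample record below is made up for illustration, not taken from the dataset):

```python
def words_to_text(example: dict) -> str:
    """Join the `text` field of every word entry, in stored order.

    Iterates over word groups first, then over the entries inside
    each group.
    """
    return " ".join(w["text"] for group in example["words"] for w in group)


sample = {
    "words": [
        [{"box": [10, 10, 50, 20], "text": "TOTAL"},
         {"box": [60, 10, 95, 20], "text": "COST:"}],
        [{"box": [10, 30, 45, 40], "text": "$12.00"}],
    ]
}
print(words_to_text(sample))  # → TOTAL COST: $12.00
```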
break_data
2023-04-05T09:42:04.000Z
[ "task_categories:text2text-generation", "task_ids:open-domain-abstractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
null
Break is a human annotated dataset of natural language questions and their Question Decomposition Meaning Representations (QDMRs). Break consists of 83,978 examples sampled from 10 question answering datasets over text, images and databases. This repository contains the Break dataset along with information on the exact data format.
@article{Wolfson2020Break, title={Break It Down: A Question Understanding Benchmark}, author={Wolfson, Tomer and Geva, Mor and Gupta, Ankit and Gardner, Matt and Goldberg, Yoav and Deutch, Daniel and Berant, Jonathan}, journal={Transactions of the Association for Computational Linguistics}, year={2020}, }
null
0
176
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text2text-generation task_ids: - open-domain-abstractive-qa paperswithcode_id: break pretty_name: BREAK dataset_info: - config_name: QDMR-high-level features: - name: question_id dtype: string - name: question_text dtype: string - name: decomposition dtype: string - name: operators dtype: string - name: split dtype: string splits: - name: test num_bytes: 482339 num_examples: 3195 - name: train num_bytes: 5148086 num_examples: 17503 - name: validation num_bytes: 914780 num_examples: 3130 download_size: 15971078 dataset_size: 6545205 - config_name: QDMR-high-level-lexicon features: - name: source dtype: string - name: allowed_tokens dtype: string splits: - name: test num_bytes: 4240755 num_examples: 3195 - name: train num_bytes: 23234518 num_examples: 17503 - name: validation num_bytes: 4158679 num_examples: 3130 download_size: 15971078 dataset_size: 31633952 - config_name: QDMR features: - name: question_id dtype: string - name: question_text dtype: string - name: decomposition dtype: string - name: operators dtype: string - name: split dtype: string splits: - name: test num_bytes: 900632 num_examples: 8069 - name: train num_bytes: 12790466 num_examples: 44321 - name: validation num_bytes: 2237472 num_examples: 7760 download_size: 15971078 dataset_size: 15928570 - config_name: QDMR-lexicon features: - name: source dtype: string - name: allowed_tokens dtype: string splits: - name: test num_bytes: 10331822 num_examples: 8069 - name: train num_bytes: 56913064 num_examples: 44321 - name: validation num_bytes: 9936933 num_examples: 7760 download_size: 15971078 dataset_size: 77181819 - config_name: logical-forms features: - name: question_id dtype: string - name: question_text dtype: string - name: decomposition dtype: string - name: operators dtype: string - 
name: split dtype: string - name: program dtype: string splits: - name: test num_bytes: 927038 num_examples: 8006 - name: train num_bytes: 19821676 num_examples: 44098 - name: validation num_bytes: 3504893 num_examples: 7719 download_size: 15971078 dataset_size: 24253607 --- # Dataset Card for "break_data" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/allenai/Break](https://github.com/allenai/Break) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 79.86 MB - **Size of the generated dataset:** 155.55 MB - **Total 
amount of disk used:** 235.39 MB ### Dataset Summary Break is a human annotated dataset of natural language questions and their Question Decomposition Meaning Representations (QDMRs). Break consists of 83,978 examples sampled from 10 question answering datasets over text, images and databases. This repository contains the Break dataset along with information on the exact data format. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### QDMR - **Size of downloaded dataset files:** 15.97 MB - **Size of the generated dataset:** 15.93 MB - **Total amount of disk used:** 31.90 MB An example of 'validation' looks as follows. ``` { "decomposition": "return flights ;return #1 from denver ;return #2 to philadelphia ;return #3 if available", "operators": "['select', 'filter', 'filter', 'filter']", "question_id": "ATIS_dev_0", "question_text": "what flights are available tomorrow from denver to philadelphia ", "split": "dev" } ``` #### QDMR-high-level - **Size of downloaded dataset files:** 15.97 MB - **Size of the generated dataset:** 6.54 MB - **Total amount of disk used:** 22.51 MB An example of 'train' looks as follows. 
``` { "decomposition": "return ground transportation ;return #1 which is available ;return #2 from the pittsburgh airport ;return #3 to downtown ;return the cost of #4", "operators": "['select', 'filter', 'filter', 'filter', 'project']", "question_id": "ATIS_dev_102", "question_text": "what ground transportation is available from the pittsburgh airport to downtown and how much does it cost ", "split": "dev" } ``` #### QDMR-high-level-lexicon - **Size of downloaded dataset files:** 15.97 MB - **Size of the generated dataset:** 31.64 MB - **Total amount of disk used:** 47.61 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "allowed_tokens": "\"['higher than', 'same as', 'what ', 'and ', 'than ', 'at most', 'he', 'distinct', 'House', 'two', 'at least', 'or ', 'date', 'o...", "source": "What office, also held by a member of the Maine House of Representatives, did James K. Polk hold before he was president?" } ``` #### QDMR-lexicon - **Size of downloaded dataset files:** 15.97 MB - **Size of the generated dataset:** 77.19 MB - **Total amount of disk used:** 93.16 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "allowed_tokens": "\"['higher than', 'same as', 'what ', 'and ', 'than ', 'at most', 'distinct', 'two', 'at least', 'or ', 'date', 'on ', '@@14@@', ...", "source": "what flights are available tomorrow from denver to philadelphia " } ``` #### logical-forms - **Size of downloaded dataset files:** 15.97 MB - **Size of the generated dataset:** 24.25 MB - **Total amount of disk used:** 40.22 MB An example of 'train' looks as follows. 
``` { "decomposition": "return ground transportation ;return #1 which is available ;return #2 from the pittsburgh airport ;return #3 to downtown ;return the cost of #4", "operators": "['select', 'filter', 'filter', 'filter', 'project']", "program": "some program", "question_id": "ATIS_dev_102", "question_text": "what ground transportation is available from the pittsburgh airport to downtown and how much does it cost ", "split": "dev" } ``` ### Data Fields The data fields are the same among all splits. #### QDMR - `question_id`: a `string` feature. - `question_text`: a `string` feature. - `decomposition`: a `string` feature. - `operators`: a `string` feature. - `split`: a `string` feature. #### QDMR-high-level - `question_id`: a `string` feature. - `question_text`: a `string` feature. - `decomposition`: a `string` feature. - `operators`: a `string` feature. - `split`: a `string` feature. #### QDMR-high-level-lexicon - `source`: a `string` feature. - `allowed_tokens`: a `string` feature. #### QDMR-lexicon - `source`: a `string` feature. - `allowed_tokens`: a `string` feature. #### logical-forms - `question_id`: a `string` feature. - `question_text`: a `string` feature. - `decomposition`: a `string` feature. - `operators`: a `string` feature. - `split`: a `string` feature. - `program`: a `string` feature. 
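The `decomposition` strings shown in the examples above chain steps with `;` and use `#N` placeholders to reference earlier steps. A small parser sketch (the format is inferred from the examples here, not from an official specification):

```python
import re


def parse_decomposition(decomposition: str):
    """Split a QDMR decomposition into steps and collect, per step,
    the earlier-step references (#N placeholders) it contains."""
    steps = [s.strip() for s in decomposition.split(";")]
    # Drop the leading "return " keyword from each step
    steps = [re.sub(r"^return\s+", "", s) for s in steps]
    refs = [[int(n) for n in re.findall(r"#(\d+)", s)] for s in steps]
    return steps, refs


d = ("return flights ;return #1 from denver ;"
     "return #2 to philadelphia ;return #3 if available")
steps, refs = parse_decomposition(d)
print(steps[1])  # → #1 from denver
print(refs)      # → [[], [1], [2], [3]]
```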
### Data Splits | name |train|validation|test| |-----------------------|----:|---------:|---:| |QDMR |44321| 7760|8069| |QDMR-high-level |17503| 3130|3195| |QDMR-high-level-lexicon|17503| 3130|3195| |QDMR-lexicon |44321| 7760|8069| |logical-forms |44098| 7719|8006| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{Wolfson2020Break, title={Break It Down: A Question Understanding Benchmark}, author={Wolfson, Tomer and Geva, Mor and Gupta, Ankit and Gardner, Matt and Goldberg, Yoav and Deutch, Daniel and Berant, Jonathan}, journal={Transactions of the Association for Computational Linguistics}, year={2020}, } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
kinnews_kirnews
2023-06-01T14:59:50.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:topic-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "source_datasets:original", "language:rn", "language:rw", "license:mit", "arxiv:2010.12174", "region:us" ]
null
Kinyarwanda and Kirundi news classification datasets
@article{niyongabo2020kinnews, title={KINNEWS and KIRNEWS: Benchmarking Cross-Lingual Text Classification for Kinyarwanda and Kirundi}, author={Niyongabo, Rubungo Andre and Qu, Hong and Kreutzer, Julia and Huang, Li}, journal={arXiv preprint arXiv:2010.12174}, year={2020} }
null
1
176
--- annotations_creators: - expert-generated language_creators: - found language: - rn - rw license: - mit multilinguality: - monolingual size_categories: - 10K<n<100K - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification - topic-classification paperswithcode_id: kinnews-and-kirnews pretty_name: KinnewsKirnews dataset_info: - config_name: kinnews_raw features: - name: label dtype: class_label: names: '0': politics '1': sport '2': economy '3': health '4': entertainment '5': history '6': technology '7': tourism '8': culture '9': fashion '10': religion '11': environment '12': education '13': relationship - name: kin_label dtype: string - name: en_label dtype: string - name: url dtype: string - name: title dtype: string - name: content dtype: string splits: - name: train num_bytes: 38316546 num_examples: 17014 - name: test num_bytes: 11971938 num_examples: 4254 download_size: 27377755 dataset_size: 50288484 - config_name: kinnews_cleaned features: - name: label dtype: class_label: names: '0': politics '1': sport '2': economy '3': health '4': entertainment '5': history '6': technology '7': tourism '8': culture '9': fashion '10': religion '11': environment '12': education '13': relationship - name: title dtype: string - name: content dtype: string splits: - name: train num_bytes: 32780382 num_examples: 17014 - name: test num_bytes: 8217453 num_examples: 4254 download_size: 27377755 dataset_size: 40997835 - config_name: kirnews_raw features: - name: label dtype: class_label: names: '0': politics '1': sport '2': economy '3': health '4': entertainment '5': history '6': technology '7': tourism '8': culture '9': fashion '10': religion '11': environment '12': education '13': relationship - name: kir_label dtype: string - name: en_label dtype: string - name: url dtype: string - name: title dtype: string - name: content dtype: string splits: - name: train num_bytes: 7343223 num_examples: 3689 - name: test num_bytes: 
2499189 num_examples: 923 download_size: 5186111 dataset_size: 9842412 - config_name: kirnews_cleaned features: - name: label dtype: class_label: names: '0': politics '1': sport '2': economy '3': health '4': entertainment '5': history '6': technology '7': tourism '8': culture '9': fashion '10': religion '11': environment '12': education '13': relationship - name: title dtype: string - name: content dtype: string splits: - name: train num_bytes: 6629767 num_examples: 3689 - name: test num_bytes: 1570745 num_examples: 923 download_size: 5186111 dataset_size: 8200512 config_names: - kinnews_cleaned - kinnews_raw - kirnews_cleaned - kirnews_raw --- # Dataset Card for kinnews_kirnews ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [More Information Needed] - **Repository:** https://github.com/Andrews2017/KINNEWS-and-KIRNEWS-Corpus - **Paper:** [KINNEWS and KIRNEWS: Benchmarking Cross-Lingual Text Classification for Kinyarwanda and Kirundi](https://arxiv.org/abs/2010.12174) - 
**Leaderboard:** NA - **Point of Contact:** [Rubungo Andre Niyongabo](mailto:niyongabor.andre@std.uestc.edu.cn) ### Dataset Summary Kinyarwanda and Kirundi news classification datasets (KINNEWS and KIRNEWS, respectively), which were both collected from Rwanda and Burundi news websites and newspapers, for low-resource monolingual and cross-lingual multiclass classification tasks. ### Supported Tasks and Leaderboards This dataset can be used for text classification of news articles in the Kinyarwanda and Kirundi languages. Each news article can be classified into one of the 14 possible classes. The classes are: - politics - sport - economy - health - entertainment - history - technology - tourism - culture - fashion - religion - environment - education - relationship ### Languages Kinyarwanda and Kirundi ## Dataset Structure ### Data Instances Here is an example from the dataset: | Field | Value | | ----- | ----------- | | label | 1 | | kin_label/kir_label | 'inkino' | | url | 'https://nawe.bi/Primus-Ligue-Imirwi-igiye-guhura-gute-ku-ndwi-ya-6-y-ihiganwa.html' | | title | 'Primus Ligue\xa0: Imirwi igiye guhura gute ku ndwi ya 6 y’ihiganwa\xa0?'| | content | ' Inkino zitegekanijwe kuruno wa gatandatu igenekerezo rya 14 Nyakanga umwaka wa 2019...'| | en_label| 'sport'| ### Data Fields The raw version of the data for the Kinyarwanda language consists of these fields: - label: The category of the news article - kin_label/kir_label: The associated label in the Kinyarwanda/Kirundi language - en_label: The associated label in English - url: The URL of the news article - title: The title of the news article - content: The content of the news article The cleaned version contains only the `label`, `title` and `content` fields. ### Data Splits Lang| Train | Test | |---| ----- | ---- | |Kinyarwanda Raw|17014|4254| |Kinyarwanda Clean|17014|4254| |Kirundi Raw|3689|923| |Kirundi Clean|3689|923| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and 
Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @article{niyongabo2020kinnews, title={KINNEWS and KIRNEWS: Benchmarking Cross-Lingual Text Classification for Kinyarwanda and Kirundi}, author={Niyongabo, Rubungo Andre and Qu, Hong and Kreutzer, Julia and Huang, Li}, journal={arXiv preprint arXiv:2010.12174}, year={2020} } ``` ### Contributions Thanks to [@saradhix](https://github.com/saradhix) for adding this dataset.
search_qa
2023-06-16T09:03:21.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:unknown", "arxiv:1704.05179", "region:us" ]
null
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
null
null
10
176
--- annotations_creators: - found language: - en language_creators: - found license: - unknown multilinguality: - monolingual pretty_name: SearchQA size_categories: - 100K<n<1M source_datasets: - original task_categories: - question-answering task_ids: - extractive-qa paperswithcode_id: searchqa dataset_info: - config_name: raw_jeopardy features: - name: category dtype: string - name: air_date dtype: string - name: question dtype: string - name: value dtype: string - name: answer dtype: string - name: round dtype: string - name: show_number dtype: int32 - name: search_results sequence: - name: urls dtype: string - name: snippets dtype: string - name: titles dtype: string - name: related_links dtype: string splits: - name: train num_bytes: 7770972348 num_examples: 216757 download_size: 3314386157 dataset_size: 7770972348 - config_name: train_test_val features: - name: category dtype: string - name: air_date dtype: string - name: question dtype: string - name: value dtype: string - name: answer dtype: string - name: round dtype: string - name: show_number dtype: int32 - name: search_results sequence: - name: urls dtype: string - name: snippets dtype: string - name: titles dtype: string - name: related_links dtype: string splits: - name: train num_bytes: 5303005740 num_examples: 151295 - name: test num_bytes: 1466749978 num_examples: 43228 - name: validation num_bytes: 740962715 num_examples: 21613 download_size: 3148550732 dataset_size: 7510718433 --- # Dataset Card for "search_qa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - 
[Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/nyu-dl/dl4ir-searchQA - **Paper:** [SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine](https://arxiv.org/abs/1704.05179) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 6.46 GB - **Size of the generated dataset:** 15.28 GB - **Total amount of disk used:** 21.74 GB ### Dataset Summary We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. 
We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### raw_jeopardy - **Size of downloaded dataset files:** 3.31 GB - **Size of the generated dataset:** 7.77 GB - **Total amount of disk used:** 11.09 GB An example of 'train' looks as follows. ``` ``` #### train_test_val - **Size of downloaded dataset files:** 3.15 GB - **Size of the generated dataset:** 7.51 GB - **Total amount of disk used:** 10.66 GB An example of 'validation' looks as follows. ``` ``` ### Data Fields The data fields are the same among all splits. #### raw_jeopardy - `category`: a `string` feature. - `air_date`: a `string` feature. - `question`: a `string` feature. - `value`: a `string` feature. - `answer`: a `string` feature. - `round`: a `string` feature. - `show_number`: a `int32` feature. - `search_results`: a dictionary feature containing: - `urls`: a `string` feature. - `snippets`: a `string` feature. - `titles`: a `string` feature. - `related_links`: a `string` feature. #### train_test_val - `category`: a `string` feature. - `air_date`: a `string` feature. - `question`: a `string` feature. - `value`: a `string` feature. - `answer`: a `string` feature. - `round`: a `string` feature. - `show_number`: a `int32` feature. - `search_results`: a dictionary feature containing: - `urls`: a `string` feature. - `snippets`: a `string` feature. - `titles`: a `string` feature. 
- `related_links`: a `string` feature. ### Data Splits #### raw_jeopardy | |train | |------------|-----:| |raw_jeopardy|216757| #### train_test_val | |train |validation|test | |--------------|-----:|---------:|----:| |train_test_val|151295| 21613|43228| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{DBLP:journals/corr/DunnSHGCC17, author = {Matthew Dunn and Levent Sagun and Mike Higgins and V. Ugur G{\"{u}}ney and Volkan Cirik and Kyunghyun Cho}, title = {SearchQA: {A} New Q{\&}A Dataset Augmented with Context from a Search Engine}, journal = {CoRR}, volume = {abs/1704.05179}, year = {2017}, url = {http://arxiv.org/abs/1704.05179}, archivePrefix = {arXiv}, eprint = {1704.05179}, timestamp = {Mon, 13 Aug 2018 16:47:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/DunnSHGCC17.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
nielsr/CelebA-faces
2022-03-21T14:48:37.000Z
[ "region:us" ]
nielsr
null
null
null
3
176
Entry not found
EleutherAI/pythia-memorized-evals
2023-03-14T15:12:36.000Z
[ "region:us" ]
EleutherAI
null
null
null
1
176
--- dataset_info: features: - name: index dtype: int64 - name: tokens sequence: int64 - name: __index_level_0__ dtype: int64 splits: - name: duped.1.4b num_bytes: 730820104 num_examples: 1373722 - name: deduped.1.4b num_bytes: 557587604 num_examples: 1048097 - name: duped.160m num_bytes: 366906036 num_examples: 689673 - name: deduped.160m num_bytes: 309195740 num_examples: 581195 - name: duped.12b num_bytes: 1267397432 num_examples: 2382326 - name: deduped.12b num_bytes: 995486380 num_examples: 1871215 - name: duped.70m num_bytes: 246822996 num_examples: 463953 - name: deduped.70m num_bytes: 218890336 num_examples: 411448 - name: duped.2.8b num_bytes: 891140964 num_examples: 1675077 - name: deduped.2.8b num_bytes: 720972252 num_examples: 1355211 - name: duped.410m num_bytes: 516221412 num_examples: 970341 - name: deduped.410m num_bytes: 431472748 num_examples: 811039 - name: duped.6.9b num_bytes: 1128355508 num_examples: 2120969 - name: deduped.6.9b num_bytes: 893916408 num_examples: 1680294 - name: duped.1b num_bytes: 668267012 num_examples: 1256141 - name: deduped.1b num_bytes: 549484180 num_examples: 1032865 download_size: 2931941971 dataset_size: 10492937112 --- # Dataset Card for "pythia-memorized-evals" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vjain/Therapy
2023-05-16T22:31:22.000Z
[ "region:us" ]
vjain
null
null
null
1
176
Entry not found
result-kand2-sdxl-wuerst-karlo/040dec0a
2023-10-03T08:51:41.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
null
0
176
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 160 num_examples: 10 download_size: 1292 dataset_size: 160 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "040dec0a" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/f4d8fc49
2023-10-03T08:54:45.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
null
0
176
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 159 num_examples: 10 download_size: 1306 dataset_size: 159 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "f4d8fc49" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
atasoglu/databricks-dolly-15k-tr
2023-05-01T10:30:39.000Z
[ "task_categories:question-answering", "size_categories:10K<n<100K", "language:tr", "license:cc-by-sa-3.0", "region:us" ]
atasoglu
null
null
null
6
175
--- license: cc-by-sa-3.0 task_categories: - question-answering language: - tr pretty_name: databricks-dolly-15k-tr size_categories: - 10K<n<100K --- This dataset is a machine-translated version of [databricks-dolly-15k.jsonl](https://github.com/databrickslabs/dolly/tree/master/data) into Turkish. The translation was done with `googletrans==3.1.0a0`.
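The card names only the library and version; the translation script itself is not included. Below is a minimal sketch of how the per-record translation might look. The field names follow the upstream databricks-dolly-15k schema, and the `translate` callable is a stand-in: with `googletrans==3.1.0a0` it would wrap something like `Translator().translate(text, src="en", dest="tr").text`.

```python
from typing import Callable, Dict

# Text fields of a databricks-dolly-15k record (upstream schema).
TEXT_FIELDS = ("instruction", "context", "response")

def translate_record(record: Dict[str, str], translate: Callable[[str], str]) -> Dict[str, str]:
    """Apply a str -> str translation function to the text fields of one record."""
    return {
        key: translate(value) if key in TEXT_FIELDS and value else value
        for key, value in record.items()
    }

# Placeholder translator (uppercase) just to show the record flow;
# the real run would pass a googletrans-backed callable instead.
record = {"instruction": "Name a color.", "context": "", "response": "Blue.", "category": "open_qa"}
translated = translate_record(record, str.upper)
print(translated["response"])  # BLUE.
```

Note that non-text fields (`category`) and empty fields are passed through unchanged, which keeps the translated file aligned with the original JSONL.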
Mike0307/MNIST-M
2023-07-04T15:10:25.000Z
[ "license:mit", "region:us" ]
Mike0307
null
null
null
0
175
--- license: mit dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': '0' '1': '1' '2': '2' '3': '3' '4': '4' '5': '5' '6': '6' '7': '7' '8': '8' '9': '9' splits: - name: train num_bytes: 119131988.027 num_examples: 59001 - name: test num_bytes: 18049625.166 num_examples: 9001 download_size: 143468539 dataset_size: 137181613.193 ---
FreedomIntelligence/CMB
2023-08-19T09:45:53.000Z
[ "task_categories:question-answering", "task_categories:text-generation", "size_categories:100K<n<1M", "language:zh", "license:apache-2.0", "medical", "biology", "chemistry", "region:us" ]
FreedomIntelligence
Chinese Medical Benchmark
coming soon~
null
5
175
--- license: apache-2.0 task_categories: - question-answering - text-generation language: - zh tags: - medical - biology - chemistry size_categories: - 100K<n<1M --- # CMB: A Comprehensive Medical Benchmark in Chinese ![CMB](assets/title.png) <p align="center"> 🌐 <a href="https://cmedbenchmark.llmzoo.com/#home" target="_blank">Website</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/CMB" target="_blank">HuggingFace</a> ## 🌈 Update * **[2023.08.01]** 🎉🎉🎉 CMB is published!🎉🎉🎉 ## 🌐 Download Data - (Recommended) Download the [zip file](https://github.com/FreedomIntelligence/CMB/tree/main/data) and unzip: ```bash git clone "https://github.com/FreedomIntelligence/CMB.git" && cd CMB && unzip "./data/CMB.zip" -d "./data/" && rm "./data/CMB.zip" ``` - Or load our data as follows: ```python from datasets import load_dataset # CMB-Exam datasets (multiple-choice and multiple-answer questions) exam_datasets = load_dataset('FreedomIntelligence/CMB','exam') # CMB-Clin datasets clin_datasets = load_dataset('FreedomIntelligence/CMB','clin') ``` ## 🥇 Leaderboard Please Check [Leaderboard](https://cmedbenchmark.llmzoo.com/static/leaderboard.html). 
## 🥸 Dataset intro ![CMB](assets/CMB-2.svg) ### Components - CMB-Exam: Comprehensive multi-level assessment for medical knowledge - Structure: 6 major categories and 28 subcategories, [View Catalog](catalog.md) - CMB-test: 400 questions per subcategory, 11200 questions in total - CMB-val: 280 questions with solutions and explanations; used as source for CoT and few-shot - CMB-train: 269359 questions for medical knowledge injection - CMB-Clin: 74 cases of complex medical inquiries ### CMB-Exam Item ```json { "exam_type": "医师考试", "exam_class": "执业医师", "exam_subject": "口腔执业医师", "question": "患者,男性,11岁。近2个月来时有低热(37~38℃),全身无明显症状。查体无明显阳性体征。X线检查发现右肺中部有一直径约0.8cm类圆形病灶,边缘稍模糊,肺门淋巴结肿大。此男孩可能患", "answer": "D", "question_type": "单项选择题", "option": { "A": "小叶型肺炎", "B": "浸润性肺结核", "C": "继发性肺结核", "D": "原发性肺结核", "E": "粟粒型肺结核" } }, ``` - exam_type: major category - exam_class: sub-category - exam_subject: Specific departments or subdivisions of disciplines - question_type: *multiple-choice (单项选择题)* or *multiple-answer (多项选择题)* ### CMB-Clin Item ```json { "id": 0, "title": "案例分析-腹外疝", "description": "现病史\n(1)病史摘要\n 病人,男,49岁,3小时前解大便后出现右下腹疼痛,右下腹可触及一包块,既往体健。\n(2)主诉\n 右下腹痛并自扪及包块3小时。\n\n体格检查\n体温: T 37.8℃,P 101次/分,呼吸22次/分,BP 100/60mmHg,腹软,未见胃肠型蠕动波,肝脾肋下未及,于右侧腹股沟区可扪及一圆形肿块,约4cm×4cm大小,有压痛、界欠清,且肿块位于腹股沟韧带上内方。\n\n辅助检查\n(1)实验室检查\n 血常规:WBC 5.0×109/L,N 78%。\n 尿常规正常。\n(2)多普勒超声检查\n 沿腹股沟纵切可见一多层分布的混合回声区,宽窄不等,远端膨大,边界整齐,长约4~5cm。\n(3)腹部X线检查\n 可见阶梯状液气平。", "QA_pairs": [ { "question": "简述该病人的诊断及诊断依据。", "solution": "诊断:嵌顿性腹股沟斜疝合并肠梗阻。\n诊断依据:\n①右下腹痛并自扪及包块3小时;\n②有腹胀、呕吐,类似肠梗阻表现;腹部平片可见阶梯状液平,考虑肠梗阻可能;腹部B超考虑,\n腹部包块内可能为肠管可能;\n③有轻度毒性反应或是中毒反应,如 T 37.8℃,P 101次/分,白细胞中性分类78%;\n④腹股沟区包块位于腹股沟韧带上内方。" }, { "question": "简述该病人的鉴别诊断。", "solution":
"(1)睾丸鞘膜积液:鞘膜积液所呈现的肿块完全局限在阴囊内,其上界可以清楚地摸到;用透光试验检查肿块,鞘膜积液多为透光(阳性),而疝块则不能透光。\n(2)交通性鞘膜积液:肿块的外形与睾丸鞘膜积液相似。于每日起床后或站立活动时肿块缓慢地出现并增大。平卧或睡觉后肿块逐渐缩小,挤压肿块,其体积也可逐渐缩小。透光试验为阳性。\n(3)精索鞘膜积液:肿块较小,在腹股沟管内,牵拉同侧睾丸可见肿块移动。\n(4)隐睾:腹股沟管内下降不全的睾丸可被误诊为斜疝或精索鞘膜积液。隐睾肿块较小,挤压时可出现特有的胀痛感觉。如患侧阴囊内睾丸缺如,则诊断更为明确。\n(5)急性肠梗阻:肠管被嵌顿的疝可伴发急性肠梗阻,但不应仅满足于肠梗阻的诊断而忽略疝的存在;尤其是病人比较肥胖或疝块较小时,更易发生这类问题而导致治疗上的错误。\n(6)此外,腹股沟区肿块还应与以下疾病鉴别:肿大的淋巴结、动(静)脉瘤、软组织肿瘤、脓肿、\n圆韧带囊肿、子宫内膜异位症等。" }, { "question": "简述该病人的治疗原则。", "solution": "嵌顿性疝原则上需要紧急手术治疗,以防止疝内容物坏死并解除伴发的肠梗阻。术前应做好必要的准备,如有脱水和电解质紊乱,应迅速补液加以纠正。手术的关键在于正确判断疝内容物的活力,然后根据病情确定处理方法。在扩张或切开疝环、解除疝环压迫的前提下,凡肠管呈紫黑色,失去光泽和弹性,刺激后无蠕动和相应肠系膜内无动脉搏动者,即可判定为肠坏死。如肠管尚未坏死,则可将其送回腹腔,按一般易复性疝处理,即行疝囊高位结扎+疝修补术。如肠管确已坏死或一时不能肯定肠管是否已失去活力时,则应在病人全身情况允许的前提下,切除该段肠管并进行一期吻合。凡施行肠切除吻合术的病人,因手术区污染,在高位结扎疝囊后,一般不宜作疝修补术,以免因感染而致修补失败。" } ] }, ``` - title: name of disease - description: information of patient - QA_pairs: a series of questions and their solutions based on the description ## ℹ️ How to evaluate and submit refer to [link](https://github.com/FreedomIntelligence/CMB) ## 😘 Citation Please use the following citation if you intend to use our dataset for training or evaluation: ``` @misc{cmedbenchmark, title={CMB: Chinese Medical Benchmark}, author={Xidong Wang*, Guiming Hardy Chen*, Dingjie Song*, Zhiyi Zhang*, Qingying Xiao, Xiangbo Wu, Feng Jiang, Jianquan Li, Benyou Wang}, note={Xidong Wang, Guiming Hardy Chen, Dingjie Song, and Zhiyi Zhang contributed equally to this github repo.}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/FreedomIntelligence/CMB}}, } ``` ## Acknowledgement - We thank [Shenzhen Research Institute of Big Data](http://www.sribd.cn/) for their enormous support for this project. - We thank the following doctors for participating in the human evaluation of CMB-Clin: - 林士军 (香港中文大学(深圳)附属第二医院) - 常河 - 许晓爽
kandriiashevskyi/wix_looker_ai
2023-10-10T09:12:52.000Z
[ "region:us" ]
kandriiashevskyi
null
null
null
0
175
Entry not found
emozilla/pg19
2023-10-09T15:06:39.000Z
[ "region:us" ]
emozilla
null
null
null
3
175
--- dataset_info: features: - name: short_book_title dtype: string - name: publication_date dtype: int32 - name: url dtype: string - name: text dtype: string splits: - name: train num_bytes: 11453688452 num_examples: 28602 - name: validation num_bytes: 17402295 num_examples: 50 - name: test num_bytes: 40482852 num_examples: 100 download_size: 2257437892 dataset_size: 11511573599 --- # Dataset Card for "pg19" Parquet version of [pg19](https://huggingface.co/datasets/pg19) Statistics (in # of characters): `total_len: 11425076324, average_len: 399450.2595622684`
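The character-level statistics quoted in the pg19 card can be reproduced with a single pass over the `text` column; a minimal sketch (toy input shown — on the real dataset `texts` would be something like `dataset["train"]["text"]`):

```python
def length_stats(texts):
    """Total and average character length over a collection of documents."""
    lengths = [len(t) for t in texts]
    total = sum(lengths)
    return {"total_len": total, "average_len": total / len(lengths)}

# Toy example; the card's numbers come from running this over the full dataset.
stats = length_stats(["abcd", "ab"])
print(stats)  # {'total_len': 6, 'average_len': 3.0}
```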
result-kand2-sdxl-wuerst-karlo/6a3f723d
2023-10-03T08:48:05.000Z
[ "region:us" ]
result-kand2-sdxl-wuerst-karlo
null
null
null
0
175
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 162 num_examples: 10 download_size: 1317 dataset_size: 162 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "6a3f723d" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
iohadrubin/smcalflow
2022-01-01T20:57:52.000Z
[ "region:us" ]
iohadrubin
null
2
174
Entry not found
ranpox/xfund
2021-09-08T11:15:02.000Z
[ "region:us" ]
ranpox
null
null
null
3
174
Entry not found
JeremyAlain/SLF5K
2023-01-24T14:21:35.000Z
[ "task_categories:summarization", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:apache-2.0", "feedback", "human feedback", "language feedback", "binary feedback", "reward", "reward model", "gpt3", "gpt-3", "instructgpt", "alignment", "ai alignment", "scale", "imitation learning from language feedback", "ilf", "arxiv:2009.01325", "region:us" ]
JeremyAlain
The Summarization with Language Feedback (SLF5K) dataset is an English-language dataset containing 5K unique samples that can be used for the task of abstractive summarization. Each sample consists of a Reddit title and post, a model-generated (FeedME) summary, and human-written language feedback on that summary. Additionally, each sample has a high-quality, human-written (gold) summary that should be ideal for the Reddit post. Lastly, each sample has two additional model-generated summaries with binary human preference labels indicating which summary is preferred by a human. The dataset can be used to train language models with language feedback on abstractive summarization. It can also be used to train a reward model on binary preferences.
@article{ }
null
4
174
--- annotations_creators: - expert-generated language: - en language_creators: - found license: apache-2.0 multilinguality: - monolingual pretty_name: SLF5K size_categories: - 1K<n<10K source_datasets: - original tags: - feedback - human feedback - language feedback - binary feedback - reward - reward model - gpt3 - gpt-3 - instructgpt - alignment - ai alignment - scale - imitation learning from language feedback - ilf task_categories: - summarization task_ids: [] --- # Dataset Card for SLF5K ## Dataset Description - **Repository: https://github.com/JeremyAlain/imitation_learning_from_language_feedback** - **Paper: Training Language Models with Language Feedback at Scale** - **Point of Contact: jeremy.scheurer@nyu.edu and ethan@anthropic.com** ### Dataset Summary The Summarization with Language Feedback (SLF5K) dataset is an English-language dataset containing 5K unique samples that can be used for the task of abstractive summarization. Each sample consists of a Reddit title and post, a model-generated ([FeedME](https://beta.openai.com/docs/model-index-for-researchers)) summary, and human-written language feedback on that summary. Additionally, each sample has a high-quality, human-written (gold) summary that should be ideal for the Reddit post. Lastly, each sample has two additional model-generated summaries with binary human preference labels indicating which summary is preferred by a human. The dataset can be used to train language models with language feedback on abstractive summarization. It can also be used to train a reward model on binary preferences. The Reddit posts were taken from the datasets provided by [Learning to Summarize from Human Feedback](https://arxiv.org/pdf/2009.01325.pdf), who used the initial Reddit post dataset [TL;DR: Mining Reddit to Learn Automatic Summarization](https://aclanthology.org/W17-4508.pdf). ### Supported Tasks and Leaderboards The dataset can be used to train a model for abstractive and extractive summarization.
A model can either be trained directly on human-written summaries, or leverage language feedback or binary human preferences. Model performance is evaluated in a human evaluation, where annotators rate the quality of the generated summaries. Previous work has used [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) scores, but [Learning to Summarize from Human Feedback](https://arxiv.org/pdf/2009.01325.pdf) shows that ROUGE is not an ideal metric. ### Languages English ## Dataset Structure ### Data Instances Each instance is a line in the dataset file (which is saved as .jsonl). Each instance contains various fields; the most important ones are described under Data Fields. Here is an example instance: ``` {"id":"t3_3w7gyp", "subreddit":"dogs", "title":"Puppy playing at park - other owner aggressive towards him [help]", "post":"Hi all, looking for some advice. I have a 6m old kelpie, buzz, who goes with me daily to a dog park, [...]", "tldr_human_reference_summary":"other owner at park harsh with my dog for playing to rough with his. Have tried talking to him about it, hasn't helped.", "summary_prompt":"Write an excellent summary of the given text.\n\nTitle: Puppy playing at park - other owner aggressive towards him [help]\n\nText: Hi all, looking for some advice. [...] that too.\n\nTL;DR:", "generated_summary_for_comparison_A":"New dog at park is being aggressive to my pup, owner won't stop. What do I do?", "generated_summary_for_comparison_B":"A new dog has been coming to the dog park and the first day the new dog came, the old dog (a kelpie) was all over him.", "generated_summary_for_feedback":"A new dog has been coming to the dog park and the first day the owner hauled buzz off and whacked him.
Today, the owner was staring daggers at me and lunging at buzz\/pulling his collar roughly.", "comparison_preference":"Summary A", "feedback":"The summary is concise but could include information about the poster knowing the dogs are just playing and will react if they become aggressive and wants to know how to handle things with Max's dad. ", "feedback_class":"Coverage", "has_additional_feedback":"No", "ideal_human_summary":"The poster is frustrated with a new person at the dog park who is upset with him because their young dogs are playing roughly. The poster will step in if it gets aggressive and wants the new person to understand this. "} ``` There are some additional fields like `time_spent_in_seconds_ideal_human_summary`, `time_spent_in_seconds_feedback`,`time_spent_in_seconds_comparison` which only have values for the development dataset. ### Data Fields - `id`: a unique string identifying the reddit post. - `subreddit`: subreddit of the post. - `title`: title of the reddit post. 
- `post`: reddit post - `tldr_human_reference_summary`: human reference summary automatically extracted from reddit (taken from the dataset of [TL;DR: Mining Reddit to Learn Automatic Summarization](https://aclanthology.org/W17-4508.pdf)) - `summary_prompt`: the whole prompt used to generate summaries - `generated_summary_for_comparison_A`: summary A used for binary human comparison (generated with FeedME) - `generated_summary_for_comparison_B`: summary B used for binary human comparison (generated with FeedME) - `generated_summary_for_feedback`: summary used to gather human language feedback (generated with FeedME) - `comparison_preference`: preferred summary in the human comparison, Values: "Summary A", "Summary B" - `feedback`: human language feedback on `generated_summary_for_feedback` (most important feedback point) - `feedback_class`: Class of language feedback, Values: "Coverage", "Accuracy", "Coherence", "other" - `has_additional_feedback`: Whether this sample could use more feedback on an important point. - `ideal_human_summary`: high-quality human-written summary for this sample. We instructed annotators to write an ideal summary. - `time_spent_in_seconds_ideal_human_summary`: Annotation time for ideal human summary - `time_spent_in_seconds_feedback`: Annotation time for language feedback - `time_spent_in_seconds_comparison`: Annotation time for binary comparison Note that the various data splits have varying fields. The fields that are not contained in a split have the value None. ### Data Splits The SLF5K dataset has 4 splits: _train_, _development_, _validation_, and _test_. Below are the statistics of the dataset. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 5000 | | Development | 200 | | Validation | 500 | | Test | 698 | The reason we introduce both a development and a validation dataset is the following.
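Putting the comparison fields together, a record's human-preferred summary can be looked up as in this minimal sketch (field names and values as documented above; the sample dict is abridged):

```python
def preferred_summary(record):
    """Return the text of the summary the annotator preferred.

    `comparison_preference` is either "Summary A" or "Summary B", referring
    to the two model-generated comparison summaries in the record.
    """
    key = "A" if record["comparison_preference"] == "Summary A" else "B"
    return record[f"generated_summary_for_comparison_{key}"]

# Abridged sample record (see the example instance above).
record = {
    "comparison_preference": "Summary A",
    "generated_summary_for_comparison_A": "New dog at park is being aggressive to my pup, owner won't stop. What do I do?",
    "generated_summary_for_comparison_B": "A new dog has been coming to the dog park [...]",
}
print(preferred_summary(record))
```

This pairing (chosen vs. rejected summary) is what a reward model would be trained on.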
## Dataset Creation ### Curation Rationale This dataset aims to support supervised language model training from human preferences on a summarization task with real natural training data. ### Source Data #### Initial Data Collection and Normalization The initial TL;DR dataset was made public by Völske et al. in the paper [TL;DR: Mining Reddit to Learn Automatic Summarization](https://aclanthology.org/W17-4508.pdf) (licensed under CC BY 4.0). Stiennon et al. then use this TL;DR dataset for their work [Learning to Summarize from Human Feedback](https://arxiv.org/pdf/2009.01325.pdf). They filter the TL;DR dataset for quality reasons and collect binary human preference labels. Our dataset is a subset of Stiennon et al.'s dataset, which can be downloaded [here](https://github.com/openai/summarize-from-feedback). Our train and development datasets are taken from their train dataset, and our test and validation datasets are taken from their test dataset. #### Who are the source language producers? The reddit posts are written by users of reddit.com. ### Annotations #### Annotation process We first onboarded annotators by giving them test tasks on which we evaluated their annotation quality. We then selected 31 annotators for the remainder of the project (a few were removed later on due to quality issues). Throughout the process we updated our instructions to make the tasks clearer and stayed in close contact with the annotators to answer questions etc. The various dataset splits were collected in multiple annotation iterations. The largest was a single iteration annotating 5000 samples for the train dataset. #### Who are the annotators? We used annotators through the annotation service [Surge AI](https://www.surgehq.ai/). ### Personal and Sensitive Information The annotators were completely anonymized and no information about them can be found in the dataset.
## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to align language models with human preferences by leveraging language feedback, on the task of summarization. Concretely, the goal is to develop models that produce summaries for reddit posts that are more in line with human preferences. Note that this does not imply that the outputs will be perfectly aligned with human values, i.e. outputs can still be misaligned, offensive, and contain harmful biases. While outputs from a model trained on our dataset may reflect the language of the reddit posts, summaries, and human feedback, it should always be made clear that such an output is automatically generated. ### Discussion of Biases The TL;DR dataset consists of user-submitted posts to the website reddit.com. It can thus contain content that is offensive or reflects harmful social biases. We thus recommend that models trained on the SLF5K dataset (which is based on the TL;DR dataset) be thoroughly studied for potential harmful behavior. The human preferences and feedback represented in this dataset were collected through crowd-workers and may disproportionately represent the views, biases, and values of the respective demographic of the annotators. ### Other Known Limitations The "human summaries" collected in the TL;DR dataset (and available in the SLF5K dataset under the field `tldr_human_reference_summary`) were automatically extracted from reddit.com. They are often of poor quality and do not accurately reflect human summarization performance. In our paper, we show that our human-written summaries (available in the SLF5K dataset under the field `ideal_human_summary`) are of much higher quality. ## Additional Information ### Dataset Curators The data was collected by Jérémy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, and Ethan Perez. All authors are affiliated with New York University.
Additionally, Jérémy Scheurer is affiliated with FAR AI. Jon Ander Campos is affiliated with the University of the Basque Country. Tomasz Korbak is affiliated with FAR AI and the University of Sussex. Kyunghyun Cho is affiliated with Genentech and CIFAR LMB. Ethan Perez is affiliated with FAR AI and Anthropic. ### Licensing Information The SLF5K dataset is released under the Apache 2.0 license. ### Citation Information TBD
emozilla/govreport-test-tokenized
2023-08-09T02:35:24.000Z
[ "region:us" ]
emozilla
null
null
null
0
174
--- dataset_info: features: - name: id dtype: string - name: pid dtype: string - name: input dtype: string - name: output dtype: string - name: input_ids sequence: int32 - name: attention_mask sequence: int8 - name: tokenized_len dtype: int64 splits: - name: test num_bytes: 107857269 num_examples: 973 download_size: 43840982 dataset_size: 107857269 --- # Dataset Card for "govreport-test-tokenized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ura-hcmut/MATH
2023-09-29T17:19:11.000Z
[ "task_categories:text2text-generation", "language:vi", "license:cc-by-nc-sa-4.0", "region:us" ]
ura-hcmut
null
null
null
0
174
--- license: cc-by-nc-sa-4.0 task_categories: - text2text-generation language: - vi configs: - config_name: gcp data_files: - split: train path: "MATH_gcp_training.csv" - split: test path: "MATH_gcp.csv" - config_name: azr data_files: - split: train path: "MATH_azr_training.csv" - split: test path: "MATH_azr.csv" --- # MATH dataset Original version: https://huggingface.co/datasets/lighteval/MATH Translation source code: https://github.com/martinakaduc/ura-llama/tree/main/dataset_scripts/custom_datasets
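The `configs` block in the YAML header above maps a config name and split to a CSV file. A minimal sketch of that resolution, mirroring the header verbatim (this is an illustration of the mapping, not an official loader):

```python
# Config/split -> data file mapping, copied from the YAML `configs` block above.
CONFIGS = {
    "gcp": {"train": "MATH_gcp_training.csv", "test": "MATH_gcp.csv"},
    "azr": {"train": "MATH_azr_training.csv", "test": "MATH_azr.csv"},
}

def data_file(config_name: str, split: str) -> str:
    """Return the CSV file backing a given config and split."""
    return CONFIGS[config_name][split]
```

With the `datasets` library, the equivalent would be something like `load_dataset("ura-hcmut/MATH", "gcp", split="test")`, which resolves the same mapping automatically.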
conceptual_12m
2022-11-03T16:31:22.000Z
[ "task_categories:image-to-text", "task_ids:image-captioning", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:other", "arxiv:2102.08981", "region:us" ]
null
Conceptual 12M is a large-scale dataset of 12 million image-text pairs specifically meant to be used for vision-and-language pre-training. Its data collection pipeline is a relaxed version of the one used in Conceptual Captions 3M.
@inproceedings{changpinyo2021cc12m, title = {{Conceptual 12M}: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts}, author = {Changpinyo, Soravit and Sharma, Piyush and Ding, Nan and Soricut, Radu}, booktitle = {CVPR}, year = {2021}, }
null
10
173
--- annotations_creators: - found language_creators: - found language: - en license: - other multilinguality: - monolingual size_categories: - 10M<n<100M source_datasets: - original task_categories: - image-to-text task_ids: - image-captioning paperswithcode_id: cc12m pretty_name: Conceptual 12M dataset_info: features: - name: image_url dtype: string - name: caption dtype: string splits: - name: train num_bytes: 2794168030 num_examples: 12423374 download_size: 2707204412 dataset_size: 2794168030 --- # Dataset Card for Conceptual 12M ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** [Conceptual 12M repository](https://github.com/google-research-datasets/conceptual-12m) - **Paper:** [Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts](https://arxiv.org/abs/2102.08981) - **Point of Contact:** [Conceptual Captions e-mail](mailto:conceptual-captions@google.com) ### Dataset Summary Conceptual 12M (CC12M) is a dataset with 12 million 
image-text pairs specifically meant to be used for vision-and-language pre-training. Its data collection pipeline is a relaxed version of the one used in Conceptual Captions 3M (CC3M).

### Dataset Preprocessing

This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib.request

import PIL.Image

from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent

USER_AGENT = get_datasets_user_agent()


def fetch_single_image(image_url, timeout=None, retries=0):
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image


def fetch_images(batch, num_threads, timeout=None, retries=0):
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
    return batch


num_threads = 20
dset = load_dataset("conceptual_12m")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```

### Supported Tasks and Leaderboards

- `image-captioning`: This dataset can be used to train a model for the Image Captioning task.

### Languages

All captions are in English.

## Dataset Structure

### Data Instances

Each instance represents a single image with a caption:

```
{
  'image_url': 'http://lh6.ggpht.com/-IvRtNLNcG8o/TpFyrudaT6I/AAAAAAAAM6o/_11MuAAKalQ/IMG_3422.JPG?imgmax=800',
  'caption': 'a very typical bus station'
}
```

### Data Fields

- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.

### Data Splits

There is only training data, with a total of 12,423,374 rows.

## Dataset Creation

### Curation Rationale

Conceptual 12M shares the same pipeline with Conceptual Captions (CC3M), but relaxes some processing steps.

### Source Data

#### Initial Data Collection and Normalization

From the paper:
> To arrive at CC12M, we keep the image-text filtering intact, and relax the unimodal filters only. First, for image-based filtering, we set the maximum ratio of larger to smaller dimension to 2.5 instead of 2. We still keep only JPEG images with size greater than 400 pixels, and still exclude images that trigger pornography detectors. Second, in text-based filtering, we allow text between 3 and 256 words in the alt-text. We still discard candidates with no noun or no determiner, but permit ones without prepositions. We discard the heuristics regarding high unique-word ratio covering various POS tags and word capitalization. We set the maximum fraction of word repetition allowed to 0.2. Given a larger pool of text due to the above relaxations, the threshold for counting a word type as rare is increased from 5 to 20.

> The main motivation for CC3M to perform text transformation is that a majority of candidate captions contain ultrafine-grained entities such as proper names (people, venues, locations, etc.), making it extremely difficult to learn as part of the image captioning task. In contrast, we are not restricted by the end task of image caption generation. Our intuition is that relatively more difficult pre-training data would lead to better transferability. We thus do not perform hypernymization or digit substitution. [...] The only exception to the “keep alt-texts as raw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy of the individuals in these images.
For this step, we use the Google Cloud Natural Language APIs to detect all named entities of type Person, and substitute them by a special token <PERSON>. Around 25% of all the alt-texts in CC12M are transformed in this fashion.

#### Who are the source language producers?

Not specified.

### Annotations

#### Annotation process

Annotations are extracted jointly with the images using the automatic pipeline.

#### Who are the annotators?

Not specified.

### Personal and Sensitive Information

From the paper:
> The only exception to the “keep alt-texts as raw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy of the individuals in these images. For this step, we use the Google Cloud Natural Language APIs to detect all named entities of type Person, and substitute them by a special token <PERSON>. Around 25% of all the alt-texts in CC12M are transformed in this fashion.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Soravit Changpinyo, Piyush Sharma, Nan Ding and Radu Soricut.

### Licensing Information

The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.

### Citation Information

```bibtex
@inproceedings{changpinyo2021cc12m,
  title = {{Conceptual 12M}: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts},
  author = {Changpinyo, Soravit and Sharma, Piyush and Ding, Nan and Soricut, Radu},
  booktitle = {CVPR},
  year = {2021},
}
```

### Contributions

Thanks to [@thomasw21](https://github.com/thomasw21) for adding this dataset.
florentgbelidji/car-reviews
2022-06-08T16:43:39.000Z
[ "region:us" ]
florentgbelidji
null
null
null
0
173
Entry not found
tomekkorbak/detoxify-pile-chunk3-50000-100000
2022-10-06T02:59:17.000Z
[ "region:us" ]
tomekkorbak
null
null
null
0
173
Entry not found
Jean-Baptiste/financial_news_sentiment
2022-12-29T03:14:44.000Z
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:mit", "region:us" ]
Jean-Baptiste
null
null
null
6
173
--- language: - en dataset_info: splits: - name: test num_examples: 267 - name: train num_examples: 1512 annotations_creators: - expert-generated license: - mit multilinguality: - monolingual pretty_name: financial_news_sentiment size_categories: - 1K<n<10K tags: [] task_categories: - text-classification task_ids: - multi-class-classification - sentiment-classification --- # Dataset Card for "financial_news_sentiment" Manually validated sentiment for ~2000 Canadian news articles. The dataset also includes a column `topic` which contains one of the following values: * acquisition * other * quaterly financial release * appointment to new position * dividend * corporate update * drillings results * conference * share repurchase program * grant of stocks The `topic` column was generated automatically using a zero-shot classification model and **was not** reviewed manually.
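Since the `topic` labels were produced by a zero-shot model without manual review, inspecting their distribution is a sensible first step. A minimal sketch using the label set from the card (the example records are hypothetical, not drawn from the dataset):

```python
from collections import Counter

# Hypothetical records following the card's schema: a text plus a `topic`
# label from the list above (label spellings kept exactly as in the card).
records = [
    {"text": "Firm A to acquire Firm B", "topic": "acquisition"},
    {"text": "Q3 results released", "topic": "quaterly financial release"},
    {"text": "Board declares dividend", "topic": "dividend"},
    {"text": "Another acquisition announced", "topic": "acquisition"},
]

# Tally how often each automatically generated topic appears.
topic_counts = Counter(r["topic"] for r in records)
```

Run over the real `train` split, a skewed count here would flag topics the zero-shot model may be over- or under-predicting.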
Tuana/presidents
2023-02-28T01:06:47.000Z
[ "region:us" ]
Tuana
null
null
null
1
173
--- dataset_info: features: - name: id dtype: string - name: content dtype: string - name: content_type dtype: string - name: meta struct: - name: url dtype: string - name: _split_id dtype: int64 - name: id_hash_keys sequence: string - name: score dtype: 'null' - name: embedding dtype: 'null' splits: - name: train num_bytes: 9366886 num_examples: 5529 download_size: 4997888 dataset_size: 9366886 --- # Dataset Card for "presidents" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
casehold/casehold
2023-10-04T19:55:29.000Z
[ "region:us" ]
casehold
CaseHOLD (Case Holdings On Legal Decisions) is a law dataset comprising over 53,000 multiple choice questions to identify the relevant holding of a cited case.
@inproceedings{zhengguha2021, title={When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset}, author={Lucia Zheng and Neel Guha and Brandon R. Anderson and Peter Henderson and Daniel E. Ho}, year={2021}, eprint={2104.08671}, archivePrefix={arXiv}, primaryClass={cs.CL}, booktitle={Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law}, publisher={Association for Computing Machinery} }
null
4
173
Entry not found
aboonaji/alpaca_micro_demo
2023-08-08T13:57:18.000Z
[ "region:us" ]
aboonaji
null
null
null
0
173
Entry not found
ziozzang/EverythingLM-data-V2-Ko
2023-08-23T07:03:47.000Z
[ "language:ko", "license:mit", "region:us" ]
ziozzang
null
null
null
7
173
--- license: mit language: - ko --- # Translated into Korean with DeepL All texts were machine-translated with DeepL. - Issue: some data items are missing due to the DeepL plan and processing method used. A very cheap plan was used; all data was merged into a single file and then split back apart with a small script and by hand. - This is a sample/test run of dataset creation with DeepL. - Original Dataset: totally-not-an-llm/EverythingLM-data-V2 # EverythingLM V2 Dataset **EverythingLM V2** is a diverse instruct dataset consisting of 1k human-assistant conversations. These sets were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions. ### Differences from V1: - All data in V2 is generated by GPT-4 - Higher quality dataset generation pipeline: - More humanlike seed prompts - Fixed some bugs in the script - More diverse creative writing - More diverse seed prompts in general - Attempt not to overfit the model on complex instructions by occasionally skipping evol ### Cost: Reproducing this dataset would cost roughly $40. ### Instruction Categories: - Reasoning - Creative Writing - General Knowledge - Brainstorming - Search Query - Coding - Basic Instruct We also leverage various system prompts for evol-instruct and for responding to prompts. This dataset has also been filtered to remove OpenAI alignment. ### How it stands out: - Long, detailed outputs - Humanlike creativity - CoT reasoning - Complex & challenging tasks ### Plans: - Train Llama 7b & 13b models (13b model V1 trained) - Train Llama 70b QLoRA - Generate V2 of the dataset, with more categories and GPT-4 (DONE) ✓ Included in this repo is the script to generate the dataset.
vlsp-2023-vllm/arithmetic_vi
2023-09-19T03:54:17.000Z
[ "arxiv:2005.14165", "region:us" ]
vlsp-2023-vllm
null
null
null
0
173
--- dataset_info: features: - name: context dtype: string - name: completion dtype: string - name: meta dtype: string splits: - name: test num_bytes: 1729595 num_examples: 26000 download_size: 515170 dataset_size: 1729595 --- # Arithmetic (OpenAI) Source: https://github.com/openai/gpt-3 Vietnamese version of Arithmetic. ## Citation Information ``` @article{brown2020language, title={Language Models are Few-Shot Learners}, author={Tom B. Brown and Benjamin Mann and Nick Ryder and Melanie Subbiah and Jared Kaplan and Prafulla Dhariwal and Arvind Neelakantan and Pranav Shyam and Girish Sastry and Amanda Askell and Sandhini Agarwal and Ariel Herbert-Voss and Gretchen Krueger and Tom Henighan and Rewon Child and Aditya Ramesh and Daniel M. Ziegler and Jeffrey Wu and Clemens Winter and Christopher Hesse and Mark Chen and Eric Sigler and Mateusz Litwin and Scott Gray and Benjamin Chess and Jack Clark and Christopher Berner and Sam McCandlish and Alec Radford and Ilya Sutskever and Dario Amodei}, year={2020}, eprint={2005.14165}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
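The features above pair a `context` (the prompt) with a gold `completion`, the standard setup for the GPT-3 arithmetic evaluations. A minimal evaluation sketch follows; the example record is a toy illustration, not an actual row from the dataset:

```python
# Minimal exact-match evaluation for the context/completion format:
# the model is prompted with `context` and its output is compared
# against the reference `completion`.
def exact_match(prediction: str, completion: str) -> bool:
    return prediction.strip() == completion.strip()

# Toy record mimicking the schema (hypothetical content).
example = {"context": "Q: What is 2 plus 3? A:", "completion": " 5"}
```

Averaging `exact_match` over the 26,000 test examples gives the accuracy metric used in the original arithmetic benchmark.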
NamCyan/Evol-TheVault
2023-09-15T18:04:10.000Z
[ "region:us" ]
NamCyan
null
null
null
0
173
--- dataset_info: features: - name: id dtype: int64 - name: instruction dtype: string - name: code dtype: string - name: tokenized_instruction sequence: string - name: type dtype: string - name: language dtype: string splits: - name: train num_bytes: 175466743 num_examples: 47797 download_size: 55571461 dataset_size: 175466743 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "Evol-TheVault" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
great_code
2022-11-18T20:05:00.000Z
[ "task_categories:table-to-text", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "region:us" ]
null
The dataset for the variable-misuse task, described in the ICLR 2020 paper 'Global Relational Models of Source Code' [https://openreview.net/forum?id=B1lnbRNtwr] This is the public version of the dataset used in that paper. The original, used to produce the graphs in the paper, could not be open-sourced due to licensing issues. See the public associated code repository [https://github.com/VHellendoorn/ICLR20-Great] for results produced from this dataset. This dataset was generated synthetically from the corpus of Python code in the ETH Py150 Open dataset [https://github.com/google-research-datasets/eth_py150_open].
@inproceedings{DBLP:conf/iclr/HellendoornSSMB20, author = {Vincent J. Hellendoorn and Charles Sutton and Rishabh Singh and Petros Maniatis and David Bieber}, title = {Global Relational Models of Source Code}, booktitle = {8th International Conference on Learning Representations, {ICLR} 2020, Addis Ababa, Ethiopia, April 26-30, 2020}, publisher = {OpenReview.net}, year = {2020}, url = {https://openreview.net/forum?id=B1lnbRNtwr}, timestamp = {Thu, 07 May 2020 17:11:47 +0200}, biburl = {https://dblp.org/rec/conf/iclr/HellendoornSSMB20.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
null
1
172
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - original task_categories: - table-to-text task_ids: [] paperswithcode_id: null pretty_name: GREAT dataset_info: features: - name: id dtype: int32 - name: source_tokens sequence: string - name: has_bug dtype: bool - name: error_location dtype: int32 - name: repair_candidates sequence: string - name: bug_kind dtype: int32 - name: bug_kind_name dtype: string - name: repair_targets sequence: int32 - name: edges list: list: - name: before_index dtype: int32 - name: after_index dtype: int32 - name: edge_type dtype: int32 - name: edge_type_name dtype: string - name: provenances list: - name: datasetProvenance struct: - name: datasetName dtype: string - name: filepath dtype: string - name: license dtype: string - name: note dtype: string splits: - name: train num_bytes: 14705534822 num_examples: 1798742 - name: validation num_bytes: 1502956919 num_examples: 185656 - name: test num_bytes: 7880762248 num_examples: 968592 download_size: 23310374002 dataset_size: 24089253989 --- # Dataset Card for GREAT ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - 
[Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** None - **Repository:** https://github.com/google-research-datasets/great - **Paper:** https://openreview.net/forum?id=B1lnbRNtwr - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances Here are some examples of questions and facts: ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
flexthink/librig2p-nostress-space
2022-06-24T01:23:49.000Z
[ "region:us" ]
flexthink
Grapheme-to-Phoneme training, validation and test sets
null
null
0
172
# librig2p-nostress - Grapheme-To-Phoneme Dataset This dataset contains samples that can be used to train a Grapheme-to-Phoneme system **without** stress information. The dataset is derived from the following pre-existing datasets: * [LibriSpeech ASR Corpus](https://www.openslr.org/12) * [LibriSpeech Alignments](https://github.com/CorentinJ/librispeech-alignments) * [Wikipedia Homograph Disambiguation Data](https://github.com/google/WikipediaHomographData)
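Since this is the **nostress** variant, phoneme labels carry no lexical-stress marks. A minimal sketch of what removing stress typically means for ARPAbet-style phonemes (as used in the LibriSpeech alignments): the trailing stress digit on each vowel is dropped. The convention and helper below are assumptions for illustration, not taken from the dataset itself:

```python
# Strip ARPAbet stress digits (0 = unstressed, 1 = primary, 2 = secondary)
# from a phoneme sequence, e.g. "AH0" -> "AH", "OW1" -> "OW".
# Assumption: phonemes follow the ARPAbet convention used in the
# LibriSpeech alignments this dataset is derived from.
def strip_stress(phonemes):
    return [p.rstrip("012") for p in phonemes]
```

For example, a stressed pronunciation of "hello" such as `["HH", "AH0", "L", "OW1"]` would reduce to `["HH", "AH", "L", "OW"]`, matching the no-stress targets this dataset is built for.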
eloukas/edgar-corpus
2023-07-14T07:17:12.000Z
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|other", "language:en", "license:apache-2.0", "research papers", "edgar", "sec", "finance", "financial", "filings", "10K", "10-K", "nlp", "research", "econlp", "economics", "business", "arxiv:2109.14394", "region:us" ]
eloukas
The dataset contains annual filings (10K) of all publicly traded firms from 1993-2020. The table data is stripped but all text is retained. This dataset allows easy access to the EDGAR-CORPUS dataset based on the paper EDGAR-CORPUS: Billions of Tokens Make The World Go Round (See References in README.md for details).
null
null
16
172
--- dataset_info: - config_name: . features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 40306320885 num_examples: 220375 download_size: 10734208660 dataset_size: 40306320885 - config_name: full features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 32237457024 num_examples: 176289 - name: validation num_bytes: 4023129683 num_examples: 22050 - name: test num_bytes: 4045734178 num_examples: 22036 download_size: 40699852536 dataset_size: 40306320885 - config_name: year_1993 features: - name: filename dtype: string - name: cik dtype: string - name: 
year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 112714537 num_examples: 1060 - name: validation num_bytes: 13584432 num_examples: 133 - name: test num_bytes: 14520566 num_examples: 133 download_size: 141862572 dataset_size: 140819535 - config_name: year_1994 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 198955093 num_examples: 2083 - name: validation num_bytes: 23432307 num_examples: 261 - name: test num_bytes: 26115768 num_examples: 260 download_size: 250411041 dataset_size: 248503168 - config_name: year_1995 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - 
name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 356959049 num_examples: 4110 - name: validation num_bytes: 42781161 num_examples: 514 - name: test num_bytes: 45275568 num_examples: 514 download_size: 448617549 dataset_size: 445015778 - config_name: year_1996 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 738506135 num_examples: 7589 - name: validation num_bytes: 89873905 num_examples: 949 - name: test num_bytes: 91248882 num_examples: 949 download_size: 926536700 dataset_size: 919628922 - config_name: year_1997 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 
dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 854201733 num_examples: 8084 - name: validation num_bytes: 103167272 num_examples: 1011 - name: test num_bytes: 106843950 num_examples: 1011 download_size: 1071898139 dataset_size: 1064212955 - config_name: year_1998 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 904075497 num_examples: 8040 - name: validation num_bytes: 112630658 num_examples: 1006 - name: test num_bytes: 113308750 num_examples: 1005 download_size: 1137887615 dataset_size: 1130014905 - config_name: year_1999 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: 
string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 911374885 num_examples: 7864 - name: validation num_bytes: 118614261 num_examples: 984 - name: test num_bytes: 116706581 num_examples: 983 download_size: 1154736765 dataset_size: 1146695727 - config_name: year_2000 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 926444625 num_examples: 7589 - name: validation num_bytes: 113264749 num_examples: 949 - name: test num_bytes: 114605470 num_examples: 949 download_size: 1162526814 dataset_size: 1154314844 - config_name: year_2001 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - 
name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 964631161 num_examples: 7181 - name: validation num_bytes: 117509010 num_examples: 898 - name: test num_bytes: 116141097 num_examples: 898 download_size: 1207790205 dataset_size: 1198281268 - config_name: year_2002 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1049271720 num_examples: 6636 - name: validation num_bytes: 128339491 num_examples: 830 - name: test num_bytes: 128444184 num_examples: 829 download_size: 1317817728 dataset_size: 1306055395 - config_name: year_2003 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: 
section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1027557690 num_examples: 6672 - name: validation num_bytes: 126684704 num_examples: 834 - name: test num_bytes: 130672979 num_examples: 834 download_size: 1297227566 dataset_size: 1284915373 - config_name: year_2004 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1129657843 num_examples: 7111 - name: validation num_bytes: 147499772 num_examples: 889 - name: test num_bytes: 147890092 num_examples: 889 download_size: 1439663100 dataset_size: 1425047707 - config_name: year_2005 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A 
dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1200714441 num_examples: 7113 - name: validation num_bytes: 161003977 num_examples: 890 - name: test num_bytes: 160727195 num_examples: 889 download_size: 1538876195 dataset_size: 1522445613 - config_name: year_2006 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1286566049 num_examples: 7064 - name: validation num_bytes: 160843494 num_examples: 883 - name: test num_bytes: 163270601 num_examples: 883 download_size: 1628452618 dataset_size: 1610680144 - config_name: year_2007 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: 
string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1296737173 num_examples: 6683 - name: validation num_bytes: 166735560 num_examples: 836 - name: test num_bytes: 156399535 num_examples: 835 download_size: 1637502176 dataset_size: 1619872268 - config_name: year_2008 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1525698198 num_examples: 7408 - name: validation num_bytes: 190034435 num_examples: 927 - name: test num_bytes: 187659976 num_examples: 926 download_size: 1924164839 dataset_size: 1903392609 - config_name: year_2009 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - 
name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1547816260 num_examples: 7336 - name: validation num_bytes: 188897783 num_examples: 917 - name: test num_bytes: 196463897 num_examples: 917 download_size: 1954076983 dataset_size: 1933177940 - config_name: year_2010 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1493505900 num_examples: 7013 - name: validation num_bytes: 192695567 num_examples: 877 - name: test num_bytes: 191482640 num_examples: 877 download_size: 1897687327 dataset_size: 1877684107 - config_name: year_2011 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: 
section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1481486551 num_examples: 6724 - name: validation num_bytes: 190781558 num_examples: 841 - name: test num_bytes: 185869151 num_examples: 840 download_size: 1877396421 dataset_size: 1858137260 - config_name: year_2012 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1463496224 num_examples: 6479 - name: validation num_bytes: 186247306 num_examples: 810 - name: test num_bytes: 185923601 num_examples: 810 download_size: 1854377191 dataset_size: 1835667131 - config_name: year_2013 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B 
dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1468172419 num_examples: 6372 - name: validation num_bytes: 183570866 num_examples: 797 - name: test num_bytes: 182495750 num_examples: 796 download_size: 1852839009 dataset_size: 1834239035 - config_name: year_2014 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1499451593 num_examples: 6261 - name: validation num_bytes: 181568907 num_examples: 783 - name: test num_bytes: 181046535 num_examples: 783 download_size: 1880963095 dataset_size: 1862067035 - config_name: year_2015 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: 
string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1472346721 num_examples: 6028 - name: validation num_bytes: 180128910 num_examples: 754 - name: test num_bytes: 189210252 num_examples: 753 download_size: 1860303134 dataset_size: 1841685883 - config_name: year_2016 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1471605426 num_examples: 5812 - name: validation num_bytes: 178310005 num_examples: 727 - name: test num_bytes: 177481471 num_examples: 727 download_size: 1845967492 dataset_size: 1827396902 - config_name: year_2017 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - 
name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1459021126 num_examples: 5635 - name: validation num_bytes: 174360913 num_examples: 705 - name: test num_bytes: 184398250 num_examples: 704 download_size: 1836306408 dataset_size: 1817780289 - config_name: year_2018 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1433409319 num_examples: 5508 - name: validation num_bytes: 181466460 num_examples: 689 - name: test num_bytes: 182594965 num_examples: 688 download_size: 1815810567 dataset_size: 1797470744 - config_name: year_2019 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: 
section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1421232269 num_examples: 5354 - name: validation num_bytes: 175603562 num_examples: 670 - name: test num_bytes: 176336174 num_examples: 669 download_size: 1791237155 dataset_size: 1773172005 - config_name: year_2020 features: - name: filename dtype: string - name: cik dtype: string - name: year dtype: string - name: section_1 dtype: string - name: section_1A dtype: string - name: section_1B dtype: string - name: section_2 dtype: string - name: section_3 dtype: string - name: section_4 dtype: string - name: section_5 dtype: string - name: section_6 dtype: string - name: section_7 dtype: string - name: section_7A dtype: string - name: section_8 dtype: string - name: section_9 dtype: string - name: section_9A dtype: string - name: section_9B dtype: string - name: section_10 dtype: string - name: section_11 dtype: string - name: section_12 dtype: string - name: section_13 dtype: string - name: section_14 dtype: string - name: section_15 dtype: string splits: - name: train num_bytes: 1541847387 num_examples: 5480 - name: validation num_bytes: 193498658 num_examples: 686 - name: test num_bytes: 192600298 num_examples: 685 download_size: 1946916132 dataset_size: 1927946343 annotations_creators: - no-annotation language: - en language_creators: - other license: - apache-2.0 multilinguality: - monolingual pretty_name: EDGAR-CORPUS (10-K Filings from 1999 to 2020) size_categories: - 100K<n<1M 
source_datasets: - extended|other tags: - research papers - edgar - sec - finance - financial - filings - 10K - 10-K - nlp - research - econlp - economics - business task_categories: - other task_ids: [] ---

# Dataset Card for EDGAR-CORPUS

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [References](#references)
- [Contributions](#contributions)

## Dataset Description

- **Point of Contact:** Lefteris Loukas

### Dataset Summary

This dataset card is based on the paper **EDGAR-CORPUS: Billions of Tokens Make The World Go Round**, authored by _Lefteris Loukas et al._ and published in the _ECONLP 2021_ workshop.

This dataset contains the annual reports of public companies from 1993 to 2020 from SEC EDGAR filings. There is supported functionality to load a specific year.

Note: since this is a corpus dataset, the `train/val/test` splits carry no special meaning; they simply follow the default HF card layout.

If you wish to load specific year(s) of specific companies, you probably want to use the open-source software that generated this dataset, EDGAR-CRAWLER: https://github.com/nlpaueb/edgar-crawler.
## Citation

If this work helps or inspires you in any way, please consider citing the relevant paper published at the [3rd Economics and Natural Language Processing (ECONLP) workshop](https://lt3.ugent.be/econlp/) at EMNLP 2021 (Punta Cana, Dominican Republic):

```
@inproceedings{loukas-etal-2021-edgar,
    title = "{EDGAR}-{CORPUS}: Billions of Tokens Make The World Go Round",
    author = "Loukas, Lefteris and Fergadiotis, Manos and Androutsopoulos, Ion and Malakasiotis, Prodromos",
    booktitle = "Proceedings of the Third Workshop on Economics and Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.econlp-1.2",
    pages = "13--18",
}
```

### Supported Tasks

This is a raw dataset/corpus for financial NLP. As such, there are no annotations or labels.

### Languages

The EDGAR Filings are in English.

## Dataset Structure

### Data Instances

Refer to the dataset preview.
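Beyond the preview, a record's text fields can be recombined programmatically. As a minimal, hedged sketch (the record below is a fabricated stand-in with shortened values, not real filing text; a real record would come from `datasets.load_dataset("eloukas/edgar-corpus", ...)`), the twenty `section_*` fields of one filing can be joined back into a single document, e.g. for language-model pretraining:

```python
# Field names match the dataset schema; the values here are illustrative only.
SECTION_KEYS = [
    "section_1", "section_1A", "section_1B", "section_2", "section_3",
    "section_4", "section_5", "section_6", "section_7", "section_7A",
    "section_8", "section_9", "section_9A", "section_9B", "section_10",
    "section_11", "section_12", "section_13", "section_14", "section_15",
]

def join_sections(row: dict) -> str:
    """Concatenate the non-empty sections of one filing, in Item order."""
    parts = [row[key].strip() for key in SECTION_KEYS if row.get(key, "").strip()]
    return "\n\n".join(parts)

# Fabricated stand-in record: all sections empty except two.
example_row = {key: "" for key in SECTION_KEYS}
example_row["section_1"] = "Item 1. Business. The company operates ..."
example_row["section_7"] = "Item 7. Management's Discussion and Analysis ..."

document = join_sections(example_row)
print(document.count("\n\n") + 1)  # 2 (non-empty sections kept)
```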
### Data Fields

**filename**: Name of file on EDGAR from which the report was extracted.<br>
**cik**: EDGAR identifier for a firm.<br>
**year**: Year of report.<br>
**section_1**: Corresponding section of the Annual Report.<br>
**section_1A**: Corresponding section of the Annual Report.<br>
**section_1B**: Corresponding section of the Annual Report.<br>
**section_2**: Corresponding section of the Annual Report.<br>
**section_3**: Corresponding section of the Annual Report.<br>
**section_4**: Corresponding section of the Annual Report.<br>
**section_5**: Corresponding section of the Annual Report.<br>
**section_6**: Corresponding section of the Annual Report.<br>
**section_7**: Corresponding section of the Annual Report.<br>
**section_7A**: Corresponding section of the Annual Report.<br>
**section_8**: Corresponding section of the Annual Report.<br>
**section_9**: Corresponding section of the Annual Report.<br>
**section_9A**: Corresponding section of the Annual Report.<br>
**section_9B**: Corresponding section of the Annual Report.<br>
**section_10**: Corresponding section of the Annual Report.<br>
**section_11**: Corresponding section of the Annual Report.<br>
**section_12**: Corresponding section of the Annual Report.<br>
**section_13**: Corresponding section of the Annual Report.<br>
**section_14**: Corresponding section of the Annual Report.<br>
**section_15**: Corresponding section of the Annual Report.<br>

```python
import datasets

# Load the entire dataset
raw_dataset = datasets.load_dataset("eloukas/edgar-corpus", "full")

# Load a specific year and split
year_1993_training_dataset = datasets.load_dataset("eloukas/edgar-corpus", "year_1993", split="train")
```

### Data Splits

| Config    | Training | Validation | Test   |
| --------- | -------- | ---------- | ------ |
| full      | 176,289  | 22,050     | 22,036 |
| year_1993 | 1,060    | 133        | 133    |
| year_1994 | 2,083    | 261        | 260    |
| year_1995 | 4,110    | 514        | 514    |
| year_1996 | 7,589    | 949        | 949    |
| year_1997 | 8,084    | 1,011      | 1,011  |
| year_1998 | 8,040    | 1,006      | 1,005  |
| year_1999 | 7,864    | 984        | 983    |
| year_2000 | 7,589    | 949        | 949    |
| year_2001 | 7,181    | 898        | 898    |
| year_2002 | 6,636    | 830        | 829    |
| year_2003 | 6,672    | 834        | 834    |
| year_2004 | 7,111    | 889        | 889    |
| year_2005 | 7,113    | 890        | 889    |
| year_2006 | 7,064    | 883        | 883    |
| year_2007 | 6,683    | 836        | 835    |
| year_2008 | 7,408    | 927        | 926    |
| year_2009 | 7,336    | 917        | 917    |
| year_2010 | 7,013    | 877        | 877    |
| year_2011 | 6,724    | 841        | 840    |
| year_2012 | 6,479    | 810        | 810    |
| year_2013 | 6,372    | 797        | 796    |
| year_2014 | 6,261    | 783        | 783    |
| year_2015 | 6,028    | 754        | 753    |
| year_2016 | 5,812    | 727        | 727    |
| year_2017 | 5,635    | 705        | 704    |
| year_2018 | 5,508    | 689        | 688    |
| year_2019 | 5,354    | 670        | 669    |
| year_2020 | 5,480    | 686        | 685    |

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

Initial data was collected and processed by the authors of the research paper **EDGAR-CORPUS: Billions of Tokens Make The World Go Round**.

#### Who are the source language producers?

Public firms filing with the SEC.

### Annotations

#### Annotation process

NA

#### Who are the annotators?

NA

### Personal and Sensitive Information

The dataset contains public filings data from the SEC.

## Considerations for Using the Data

### Social Impact of Dataset

Low to none.

### Discussion of Biases

The dataset is about financial information of public companies and, as such, the tone and style of the text are in line with financial literature.

### Other Known Limitations

The dataset needs further cleaning for improved performance.

## Additional Information

### Licensing Information

EDGAR data is publicly available.

### Shoutout

Huge shoutout to [@JanosAudran](https://huggingface.co/JanosAudran) for the HF Card setup!

### References

- [Research Paper] Lefteris Loukas, Manos Fergadiotis, Ion Androutsopoulos, and Prodromos Malakasiotis. EDGAR-CORPUS: Billions of Tokens Make The World Go Round.
Third Workshop on Economics and Natural Language Processing (ECONLP). https://arxiv.org/abs/2109.14394 - Punta Cana, Dominican Republic, November 2021.
- [Software] Lefteris Loukas, Manos Fergadiotis, Ion Androutsopoulos, and Prodromos Malakasiotis. EDGAR-CRAWLER. https://github.com/nlpaueb/edgar-crawler (2021)
- [EDGAR CORPUS, but in zip files] EDGAR CORPUS: A corpus for financial NLP research, built from SEC's EDGAR. https://zenodo.org/record/5528490 (2021)
- [Word Embeddings] EDGAR-W2V: Word2vec Embeddings trained on EDGAR-CORPUS. https://zenodo.org/record/5524358 (2021)
- [Applied Research paper where EDGAR-CORPUS is used] Lefteris Loukas, Manos Fergadiotis, Ilias Chalkidis, Eirini Spyropoulou, Prodromos Malakasiotis, Ion Androutsopoulos, and George Paliouras. FiNER: Financial Numeric Entity Recognition for XBRL Tagging. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). https://doi.org/10.18653/v1/2022.acl-long.303 (2022)
commaai/commavq
2023-09-19T21:38:38.000Z
[ "size_categories:100K<n<1M", "license:mit", "region:us" ]
commaai
TODO
null
null
9
172
--- license: mit size_categories: - 100K<n<1M ---

# commaVQ

commaVQ is a dataset of 100,000 heavily compressed driving videos for machine learning research. Such heavily compressed video is useful for experimenting with GPT-like video prediction models. This repo includes an encoder/decoder and an example of a video prediction model. Examples and trained models can be found here: https://github.com/commaai/commavq

# Overview

A VQ-VAE [1,2] was used to heavily compress each frame into 128 "tokens" of 10 bits each. Each entry of the dataset is a "segment" of compressed driving video, i.e. 1 minute of frames at 20 FPS. Each file is of shape 1200x8x16 and saved as int16. Note that the compressor is extremely lossy on purpose: it makes the dataset smaller and easy to experiment with (training a GPT with a large context size, fast autoregressive generation, etc.). We might extend the dataset to a less lossy version when we see fit.

<video title="source" controls> <source src="https://github.com/commaai/commavq/assets/29985433/91894bf7-592b-4204-b3f2-3e805984045c" type="video/mp4"> </video>

<video title="compressed" controls> <source src="https://github.com/commaai/commavq/assets/29985433/3a799ac8-781e-461c-bf14-c15cea42b985" type="video/mp4"> </video>

<video title="imagined" controls> <source src="https://github.com/commaai/commavq/assets/29985433/f6f7699b-b6cb-4f9c-80c9-8e00d75fbfae" type="video/mp4"> </video>

# References

[1] Van Den Oord, Aaron, and Oriol Vinyals. "Neural discrete representation learning." Advances in neural information processing systems 30 (2017).

[2] Esser, Patrick, Robin Rombach, and Bjorn Ommer. "Taming transformers for high-resolution image synthesis." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.
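To make the token layout from the Overview concrete, here is a minimal sketch. The array is fabricated with random values purely for illustration; a real segment would instead be read from one of the dataset files (e.g. with `np.load`):

```python
import numpy as np

# One "segment": 1200 frames (1 min at 20 FPS), each compressed to
# 8x16 = 128 tokens of 10 bits, stored as int16. Fabricated stand-in here.
rng = np.random.default_rng(0)
tokens = rng.integers(0, 1024, size=(1200, 8, 16), dtype=np.int16)

assert tokens.shape == (1200, 8, 16)
assert tokens.dtype == np.int16
# 10-bit tokens take values in [0, 1024)
assert int(tokens.min()) >= 0 and int(tokens.max()) < 1024

# Flatten each frame into a 128-token sequence for a GPT-style model
frames = tokens.reshape(1200, 128)
print(frames.shape)  # (1200, 128)
```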
jason-lee08/TinyStoriesWithExclamationsSmall
2023-08-20T03:52:44.000Z
[ "region:us" ]
jason-lee08
null
null
null
0
172
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* dataset_info: features: - name: input_ids sequence: int32 - name: attention_mask sequence: int8 splits: - name: train num_bytes: 23826331 num_examples: 21197 - name: validation num_bytes: 236180 num_examples: 220 download_size: 8127925 dataset_size: 24062511 --- # Dataset Card for "TinyStoriesWithExclamationsSmall" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
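Since the examples are stored pre-tokenized, each record holds parallel `input_ids` (int32) and `attention_mask` (int8) sequences. As a hedged sketch (the record below is a fabricated stand-in for one loaded with `datasets.load_dataset("jason-lee08/TinyStoriesWithExclamationsSmall")`, and it assumes the usual Hugging Face convention of `1` marking real tokens and `0` marking padding), the mask recovers the unpadded sequence length:

```python
# Fabricated stand-in record; real ids depend on whichever tokenizer was
# used to build the dataset (not documented in this card).
record = {
    "input_ids": [101, 2023, 2003, 1037, 2466, 999, 102, 0, 0, 0],
    "attention_mask": [1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
}

def unpadded_length(rec: dict) -> int:
    """Number of real (non-padding) tokens, assuming 1 = real, 0 = padding."""
    return sum(rec["attention_mask"])

assert len(record["input_ids"]) == len(record["attention_mask"])
print(unpadded_length(record))  # 7
```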
catalonia_independence
2023-06-01T14:59:47.000Z
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ca", "language:es", "license:cc-by-nc-sa-4.0", "stance-detection", "region:us" ]
null
This dataset contains two corpora in Spanish and Catalan that consist of annotated Twitter messages for automatic stance detection. The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia. Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia.
@inproceedings{zotova-etal-2020-multilingual,
    title = "Multilingual Stance Detection in Tweets: The {C}atalonia Independence Corpus",
    author = "Zotova, Elena and Agerri, Rodrigo and Nunez, Manuel and Rigau, German",
    booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
    month = may,
    year = "2020",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://www.aclweb.org/anthology/2020.lrec-1.171",
    pages = "1368--1375",
    abstract = "Stance detection aims to determine the attitude of a given text with respect to a specific topic or claim. While stance detection has been fairly well researched in the last years, most of the work has been focused on English. This is mainly due to the relative lack of annotated data in other languages. The TW-10 referendum Dataset released at IberEval 2018 is a previous effort to provide multilingual stance-annotated data in Catalan and Spanish. Unfortunately, the TW-10 Catalan subset is extremely imbalanced. This paper addresses these issues by presenting a new multilingual dataset for stance detection in Twitter for the Catalan and Spanish languages, with the aim of facilitating research on stance detection in multilingual and cross-lingual settings. The dataset is annotated with stance towards one topic, namely, the independence of Catalonia. We also provide a semi-automatic method to annotate the dataset based on a categorization of Twitter users. We experiment on the new corpus with a number of supervised approaches, including linear classifiers and deep learning methods. Comparison of our new corpus with the TW-10 dataset shows both the benefits and potential of a well-balanced corpus for multilingual and cross-lingual research on stance detection. Finally, we establish new state-of-the-art results on the TW-10 dataset, both for Catalan and Spanish.",
    language = "English",
    ISBN = "979-10-95546-34-4",
}
null
1
171
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - ca - es license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: [] paperswithcode_id: cic pretty_name: Catalonia Independence Corpus tags: - stance-detection dataset_info: - config_name: catalan features: - name: id_str dtype: string - name: TWEET dtype: string - name: LABEL dtype: class_label: names: '0': AGAINST '1': FAVOR '2': NEUTRAL splits: - name: train num_bytes: 1406250 num_examples: 6028 - name: test num_bytes: 469204 num_examples: 2010 - name: validation num_bytes: 473393 num_examples: 2010 download_size: 995415 dataset_size: 2348847 - config_name: spanish features: - name: id_str dtype: string - name: TWEET dtype: string - name: LABEL dtype: class_label: names: '0': AGAINST '1': FAVOR '2': NEUTRAL splits: - name: train num_bytes: 1507388 num_examples: 6046 - name: test num_bytes: 501783 num_examples: 2016 - name: validation num_bytes: 505092 num_examples: 2015 download_size: 1070281 dataset_size: 2514263 config_names: - catalan - spanish --- # Dataset Card for Catalonia Independence Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known 
Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/ixa-ehu/catalonia-independence-corpus - **Repository:** https://github.com/ixa-ehu/catalonia-independence-corpus - **Paper:** [Multilingual Stance Detection: The Catalonia Independence Corpus](https://www.aclweb.org/anthology/2020.lrec-1.171/) - **Leaderboard:** - **Point of Contact:** [Rodrigo Agerri](https://github.com/ragerri) (corpus creator) ### Dataset Summary This dataset contains two corpora in Spanish and Catalan that consist of annotated Twitter messages for automatic stance detection. The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia. Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Spanish and Catalan ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
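The integer-to-string mapping for the `LABEL` feature can be sketched in plain Python. This is a minimal illustration that mirrors the `class_label` names declared in the YAML metadata above; the dictionary and helper names here are ours, not part of any library:

```python
# Stance label encoding shared by the "catalan" and "spanish" configs,
# copied from the class_label names in the dataset metadata.
ID2LABEL = {0: "AGAINST", 1: "FAVOR", 2: "NEUTRAL"}
LABEL2ID = {name: idx for idx, name in ID2LABEL.items()}

def decode_labels(label_ids):
    """Map integer LABEL values (as stored in the corpus) to their string names."""
    return [ID2LABEL[i] for i in label_ids]

print(decode_labels([0, 2, 1]))  # ['AGAINST', 'NEUTRAL', 'FAVOR']
```

With the 🤗 `datasets` library, the same mapping is exposed by the `LABEL` feature's `int2str` method once the dataset is loaded — assuming the corpus is published on the Hub under an identifier such as `catalonia_independence`, e.g. `load_dataset("catalonia_independence", "catalan")`.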