Column summary for the dataset-metadata records that follow (string-length ranges, numeric ranges, or boolean class counts):

| Column | Type | Range / notes |
|---|---|---|
| `author` | string | lengths 2–29 |
| `cardData` | null | all values null |
| `citation` | string | lengths 0–9.58k |
| `description` | string | lengths 0–5.93k |
| `disabled` | bool | 1 class |
| `downloads` | float64 | 1 to 1M |
| `gated` | bool | 2 classes |
| `id` | string | lengths 2–108 |
| `lastModified` | string | length 24 (fixed) |
| `paperswithcode_id` | string | lengths 2–45 |
| `private` | bool | 2 classes |
| `sha` | string | length 40 (fixed) |
| `siblings` | list | |
| `tags` | list | |
| `readme_url` | string | lengths 57–163 |
| `readme` | string | lengths 0–977k |

Each record below gives its values for these sixteen fields in this order.
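A minimal sketch of filtering records with this schema, assuming they have been exported as JSON Lines (the filename `datasets_metadata.jsonl` is illustrative and not part of the source):

```python
import json

# Read one JSON object per line; each object carries the sixteen fields above.
with open("datasets_metadata.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

# Example filter: public, ungated repos whose tags include an Apache-2.0 license.
apache = [
    r for r in records
    if not r["private"] and not r["gated"] and "license:apache-2.0" in r["tags"]
]
print(len(apache), "public Apache-2.0 dataset repos")
```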
naem1023
null
null
null
false
1
false
naem1023/augmented-namuwiki
2022-07-25T12:45:56.000Z
null
false
9dfd7de60214c80b44de6a81dda0902c12675511
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/naem1023/augmented-namuwiki/resolve/main/README.md
--- license: apache-2.0 ---
naem1023
null
null
null
false
1
false
naem1023/augmented-kowiki
2022-07-25T13:10:57.000Z
null
false
0364e2cff990bd0fc2e78d963f2d263e9e645a91
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/naem1023/augmented-kowiki/resolve/main/README.md
--- license: apache-2.0 ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad_v2-ebf1ec50-11735562
2022-07-25T11:37:39.000Z
null
false
1439a395520ae8c2068bad1e1b07b8d5f052b9be
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-ebf1ec50-11735562/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: deepset/deberta-v3-base-squad2 metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/deberta-v3-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad_v2-e85023ec-11745564
2022-07-25T11:40:25.000Z
null
false
09d0cf6b8b8cf1c47c25270219270ee5b2207921
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-e85023ec-11745564/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: deepset/tinyroberta-squad2 metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/tinyroberta-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad_v2-e85023ec-11745565
2022-07-25T11:42:15.000Z
null
false
55b56822e4f31bfb149e822c0004ad25ad90fb94
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-e85023ec-11745565/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: deepset/roberta-large-squad2 metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/roberta-large-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad_v2-e85023ec-11745563
2022-07-25T11:47:52.000Z
null
false
15a9bdb8362664a48997e28994c2baf46eaa71f2
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-e85023ec-11745563/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: deepset/deberta-v3-base-squad2 metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/deberta-v3-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad_v2-26568076-11755566
2022-07-25T11:53:57.000Z
null
false
87c34d7017c665a0bb76b416bcfb62bfe17a2ae6
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-26568076-11755566/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: deepset/bert-large-uncased-whole-word-masking-squad2 metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/bert-large-uncased-whole-word-masking-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad_v2-df3d9ae8-11765567
2022-07-25T12:13:39.000Z
null
false
de7be88799fc7659e1e51edbcf4a85f37d249e05
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-df3d9ae8-11765567/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: deepset/deberta-v3-large-squad2 metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/deberta-v3-large-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@MichelBartels](https://huggingface.co/MichelBartels) for evaluating this model.
naem1023
null
null
null
false
1
false
naem1023/augmented-concat-100000
2022-07-25T14:30:47.000Z
null
false
6d87e9c21e26481e929329cd82ed9d27c1e7da26
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/naem1023/augmented-concat-100000/resolve/main/README.md
--- license: apache-2.0 ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-conll2003-2dc2f6d8-11805572
2022-07-25T14:27:10.000Z
null
false
9d347362dc8663670ef1512728cdaccf282ef29b
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:conll2003" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-conll2003-2dc2f6d8-11805572/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - conll2003 eval_info: task: entity_extraction model: AJGP/bert-finetuned-ner metrics: [] dataset_name: conll2003 dataset_config: conll2003 dataset_split: test col_mapping: tokens: tokens tags: ner_tags --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: AJGP/bert-finetuned-ner * Dataset: conll2003 * Config: conll2003 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@hrezaeim](https://huggingface.co/hrezaeim) for evaluating this model.
hdamghanian
null
null
null
false
1
false
hdamghanian/Stock-QA-fa
2022-07-25T15:16:43.000Z
null
false
249e666291cd556d0c0c7967ee3cb6967d77b56c
[]
[ "license:mit" ]
https://huggingface.co/datasets/hdamghanian/Stock-QA-fa/resolve/main/README.md
--- license: mit --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards This dataset is to be served as a reference for QA tasks. ### Languages Persian ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale All annotations are done according to the SQuAD2.0 data format. ### Source Data #### Initial Data Collection and Normalization All context and some of questions are retrieved from [Faradars Introductory Course to Stock Market](https://blog.faradars.org/%d8%a2%d9%85%d9%88%d8%b2%d8%b4-%d8%a8%d9%88%d8%b1%d8%b3-%d8%b1%d8%a7%db%8c%da%af%d8%a7%d9%86/). #### Who are the source language producers? Persian (farsi) ### Annotations #### Annotation process All annotations are done via Deepset Haystack annotation tool. #### Who are the annotators? Hesam Damghanian (this HF account) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed]
cakiki
null
null
null
false
1
false
cakiki/ASE_runs
2022-07-25T18:11:20.000Z
null
false
e56574b7a9a0bb7c86f71d6350a3a3a5e68646b1
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/cakiki/ASE_runs/resolve/main/README.md
--- license: apache-2.0 ---
SocialGrep
null
null
Lite version of our Reddit /r/Bitcoin dataset - CSV of all posts & comments to the /r/Bitcoin subreddit over Jun 2022.
false
38
false
SocialGrep/reddit-r-bitcoin-data-for-jun-2022
2022-07-25T18:22:16.000Z
null
false
3038d9967602ee1ba85340246bcd49bb52fd3bef
[]
[ "annotations_creators:lexyr", "language_creators:crowdsourced", "language:en", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original" ]
https://huggingface.co/datasets/SocialGrep/reddit-r-bitcoin-data-for-jun-2022/resolve/main/README.md
--- annotations_creators: - lexyr language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original --- # Dataset Card for reddit-r-bitcoin-data-for-jun-2022 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) ## Dataset Description - **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/reddit-r-bitcoin-data-for-jun-2022?utm_source=huggingface&utm_medium=link&utm_campaign=redditrbitcoindataforjun2022) - **Reddit downloader used:** [https://socialgrep.com/exports](https://socialgrep.com/exports?utm_source=huggingface&utm_medium=link&utm_campaign=redditrbitcoindataforjun2022) - **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=redditrbitcoindataforjun2022) ### Dataset Summary Lite version of our premium [Reddit /r/Bitcoin dataset](https://socialgrep.com/datasets/the-reddit-r-bitcoin-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=redditrbitcoindataforjun2022) - CSV of all posts & comments to the /r/Bitcoin subreddit over Jun 2022. ### Languages Mainly English. ## Dataset Structure ### Data Instances A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared. ### Data Fields - 'type': the type of the data point. Can be 'post' or 'comment'. - 'id': the base-36 Reddit ID of the data point. Unique when combined with type. - 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique. - 'subreddit.name': the human-readable name of the data point's host subreddit. - 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not. - 'created_utc': a UTC timestamp for the data point. - 'permalink': a reference link to the data point on Reddit. - 'score': score of the data point on Reddit. - 'domain': (Post only) the domain of the data point's link. - 'url': (Post only) the destination of the data point's link, if any. - 'selftext': (Post only) the self-text of the data point, if any. - 'title': (Post only) the title of the post data point. - 'body': (Comment only) the body of the comment data point. - 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis. ## Additional Information ### Licensing Information CC-BY v4.0
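The field list in the card above implies a two-file CSV layout with a shared column set. A hedged sketch of combining the shared columns with pandas; the filenames `posts.csv` and `comments.csv` are assumptions, since the card does not name the files:

```python
import pandas as pd

posts = pd.read_csv("posts.csv")        # post-only fields: domain, url, selftext, title
comments = pd.read_csv("comments.csv")  # comment-only fields: body, sentiment

# Columns the card lists for both posts and comments.
shared = ["type", "id", "subreddit.id", "subreddit.name",
          "subreddit.nsfw", "created_utc", "permalink", "score"]
combined = pd.concat([posts[shared], comments[shared]], ignore_index=True)

# created_utc is described only as "a UTC timestamp"; epoch seconds are assumed.
combined["created_utc"] = pd.to_datetime(combined["created_utc"], unit="s")
print(combined.groupby(combined["created_utc"].dt.date).size())
```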
anzorq
null
null
null
false
1
false
anzorq/kbd_lat-835k_ru-3M
2022-07-25T23:26:41.000Z
null
false
24f194c5b6ef29eb784ea3508a022ba848beeea4
[]
[ "license:unknown" ]
https://huggingface.co/datasets/anzorq/kbd_lat-835k_ru-3M/resolve/main/README.md
--- license: unknown --- Kbd latin script: 835k lines from a scraped pile ru: 3M lines from Wiki (OPUS)
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-adversarial_qa-58460439-11825575
2022-07-25T22:33:19.000Z
null
false
71d820d52a2662dd708036a15374bbbd68ff57b9
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:adversarial_qa" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-adversarial_qa-58460439-11825575/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - adversarial_qa eval_info: task: extractive_question_answering model: deepset/deberta-v3-large-squad2 metrics: [] dataset_name: adversarial_qa dataset_config: adversarialQA dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/deberta-v3-large-squad2 * Dataset: adversarial_qa * Config: adversarialQA * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model.
autoevaluate
null
null
null
false
7
false
autoevaluate/autoeval-staging-eval-project-adversarial_qa-58460439-11825576
2022-07-25T22:32:36.000Z
null
false
e53695a23047a407d2999206a03fc82701148a78
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:adversarial_qa" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-adversarial_qa-58460439-11825576/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - adversarial_qa eval_info: task: extractive_question_answering model: deepset/bert-large-uncased-whole-word-masking-squad2 metrics: [] dataset_name: adversarial_qa dataset_config: adversarialQA dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/bert-large-uncased-whole-word-masking-squad2 * Dataset: adversarial_qa * Config: adversarialQA * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-adversarial_qa-58460439-11825574
2022-07-25T22:39:49.000Z
null
false
2803c28d2003a2afff2a01b409ff7cd42fb0fb17
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:adversarial_qa" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-adversarial_qa-58460439-11825574/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - adversarial_qa eval_info: task: extractive_question_answering model: mbartolo/electra-large-synqa metrics: [] dataset_name: adversarial_qa dataset_config: adversarialQA dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: mbartolo/electra-large-synqa * Dataset: adversarial_qa * Config: adversarialQA * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835577
2022-07-25T22:38:58.000Z
null
false
4deeac719a3ff3df9b5866646f38a35bc45e3c0b
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835577/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad eval_info: task: extractive_question_answering model: mbartolo/roberta-large-synqa metrics: [] dataset_name: squad dataset_config: plain_text dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: mbartolo/roberta-large-synqa * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mbartolo ](https://huggingface.co/mbartolo ) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835578
2022-07-25T22:39:01.000Z
null
false
34c8374562d8b0e8846c1b926bbe84f4aef4dca5
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835578/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad eval_info: task: extractive_question_answering model: mbartolo/electra-large-synqa metrics: [] dataset_name: squad dataset_config: plain_text dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: mbartolo/electra-large-synqa * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mbartolo ](https://huggingface.co/mbartolo ) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835579
2022-07-25T22:39:01.000Z
null
false
f329c4e36fa98c42ab3d616e01018048364d47e2
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835579/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad eval_info: task: extractive_question_answering model: deepset/roberta-large-squad2 metrics: [] dataset_name: squad dataset_config: plain_text dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/roberta-large-squad2 * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mbartolo ](https://huggingface.co/mbartolo ) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835580
2022-07-25T22:39:44.000Z
null
false
29dd17d42866336178ac700cbb45bce287a38a34
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835580/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad eval_info: task: extractive_question_answering model: deepset/deberta-v3-base-squad2 metrics: [] dataset_name: squad dataset_config: plain_text dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/deberta-v3-base-squad2 * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mbartolo ](https://huggingface.co/mbartolo ) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-adversarial_qa-8ac5f360-11845582
2022-07-25T23:20:32.000Z
null
false
cdfeb9020eb204c2b5b4e28ac3ef7b18a658cb76
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:adversarial_qa" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-adversarial_qa-8ac5f360-11845582/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - adversarial_qa eval_info: task: extractive_question_answering model: mbartolo/roberta-large-synqa-ext metrics: [] dataset_name: adversarial_qa dataset_config: adversarialQA dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: mbartolo/roberta-large-synqa-ext * Dataset: adversarial_qa * Config: adversarialQA * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-adversarial_qa-8ac5f360-11845581
2022-07-25T23:26:00.000Z
null
false
8082af326609adb4497e5770cb5c05824349d0ef
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:adversarial_qa" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-adversarial_qa-8ac5f360-11845581/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - adversarial_qa eval_info: task: extractive_question_answering model: mbartolo/roberta-large-synqa metrics: [] dataset_name: adversarial_qa dataset_config: adversarialQA dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: mbartolo/roberta-large-synqa * Dataset: adversarial_qa * Config: adversarialQA * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model.
ghazalehk
null
null
null
false
1
false
ghazalehk/propara
2022-07-26T00:43:05.000Z
null
false
858940b5f0599852c1a5e3d86261b4b701a61779
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/ghazalehk/propara/resolve/main/README.md
--- license: apache-2.0 ---
nateraw
null
null
null
false
1
false
nateraw/snares
2022-07-26T01:48:47.000Z
null
false
dc842ba21530077e2385514585f385e671eb9f32
[]
[ "language:en", "license:other" ]
https://huggingface.co/datasets/nateraw/snares/resolve/main/README.md
--- language: en license: other --- # Snares FSD50K subset of just snares. ``` wget -nc https://huggingface.co/datasets/nateraw/snares/resolve/main/snares.csv wget -nc https://huggingface.co/datasets/nateraw/snares/resolve/main/snares.zip unzip snares.zip ``` If you unpack as described above, `snares.csv` will have correct filepath to audio file when loaded in as CSV. Here we show with pandas... ```python import pandas as pd df = pd.read_csv('snares.csv') ```
betterme
null
null
null
false
1
false
betterme/goldendata
2022-07-26T02:05:07.000Z
null
false
e35dab202174c0845fc172f35bfcada0a31bafa4
[]
[ "license:mit" ]
https://huggingface.co/datasets/betterme/goldendata/resolve/main/README.md
--- license: mit ---
Kamrani
null
null
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
false
3
false
Kamrani/en-fa-translation
2022-07-30T04:13:38.000Z
null
false
cdeb4ea38252c283f5717b007ae8f8d5c5d3c73f
[]
[]
https://huggingface.co/datasets/Kamrani/en-fa-translation/resolve/main/README.md
annotations_creators: - no-annotation language: - en - fa language_creators: - crowdsourced license: - other multilinguality: - multilingual pretty_name: en-fa-translation size_categories: - 1M<n<10M source_datasets: - original tags: [] task_categories: - translation task_ids: []
puffy310
null
null
null
false
1
false
puffy310/SimulaPrompts
2022-07-26T04:52:28.000Z
null
false
b0a347020b300ca529aef033c85d00f5ae42aa3e
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/puffy310/SimulaPrompts/resolve/main/README.md
--- license: apache-2.0 ---
muibk
null
@inproceedings{ma-etal-2019-results, title = {Results of the WMT19 Metrics Shared Task: Segment-Level and Strong MT Systems Pose Big Challenges}, author = {Ma, Qingsong and Wei, Johnny and Bojar, Ondřej and Graham, Yvette}, booktitle = {Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)}, month = {aug}, year = {2019}, address = {Florence, Italy}, publisher = {Association for Computational Linguistics}, url = {https://aclanthology.org/W19-5302}, doi = {10.18653/v1/W19-5302}, pages = {62--90} }
This shared task will examine automatic evaluation metrics for machine translation. We will provide you with all of the translations produced in the translation task along with the human reference translations. You will return your automatic metric scores for translations at the system-level and/or at the sentence-level. We will calculate the system-level and sentence-level correlations of your scores with WMT19 human judgements once the manual evaluation has been completed.
false
1
false
muibk/wmt19_metrics_task
2022-07-26T10:06:23.000Z
null
false
8078e27c5ff4c52d5b85572ed45d36c712a3c423
[]
[ "annotations_creators:expert-generated", "language_creators:found", "language_creators:machine-generated", "language_creators:expert-generated", "language:de-cs", "language:de-en", "language:de-fr", "language:en-cs", "language:en-de", "language:en-fi", "language:en-gu", "language:en-kk", "la...
https://huggingface.co/datasets/muibk/wmt19_metrics_task/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - found - machine-generated - expert-generated language: - de-cs - de-en - de-fr - en-cs - en-de - en-fi - en-gu - en-kk - en-lt - en-ru - en-zh - fi-en - fr-de - gu-en - kk-en - lt-en - ru-en - zh-en license: - unknown multilinguality: - translation paperswithcode_id: null pretty_name: WMT19 Metrics Shared Task size_categories: - 100K<n<1M source_datasets: [] task_categories: - translation task_ids: [] --- # Dataset Card for WMT19 Metrics Task ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [WMT19 Metrics Shared Task](https://www.statmt.org/wmt19/metrics-task.html) - **Repository:** [MT Metrics Eval Github Repository](https://github.com/google-research/mt-metrics-eval) - **Paper:** [Paper](https://aclanthology.org/W19-5302/) ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset comprises the following language pairs: - de-cs - de-en - de-fr - en-cs - en-de - en-fi - en-gu - en-kk - en-lt - en-ru - en-zh - fi-en - fr-de - gu-en - kk-en - lt-en - ru-en - zh-en ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/mustaszewski) for adding this dataset.
Achen
null
null
null
false
1
false
Achen/large-test
2022-07-27T02:39:08.000Z
null
false
38e21f35965ee9f983839b416930aa3429ac60a5
[]
[ "license:bsd-2-clause" ]
https://huggingface.co/datasets/Achen/large-test/resolve/main/README.md
--- license: bsd-2-clause ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855583
2022-07-26T08:20:48.000Z
null
false
53514cd2b8c3bccdde0a61348e5ef76d3a6748a6
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855583/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: deepset/electra-base-squad2 metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/electra-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855584
2022-07-26T08:20:20.000Z
null
false
696f9a3028e982a43e69283dab450a4be0e0f72e
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855584/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: deepset/tinybert-6l-768d-squad2 metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/tinybert-6l-768d-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855585
2022-07-26T08:20:49.000Z
null
false
55b89a23287e3762d16ad2ed49412c4dbb00d49a
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855585/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: deepset/bert-base-uncased-squad2 metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/bert-base-uncased-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855586
2022-07-26T08:20:27.000Z
null
false
e5c897042bb83fe95d7f687c51d48ed06f2b55a2
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855586/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: deepset/bert-medium-squad2-distilled metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/bert-medium-squad2-distilled * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model.
autoevaluate
null
null
null
false
7
false
autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855587
2022-07-26T08:21:10.000Z
null
false
425e4ccec0605e663e762c5a088dcc5c6884329b
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855587/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: deepset/xlm-roberta-base-squad2-distilled metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/xlm-roberta-base-squad2-distilled * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model.
jordane95
null
null
null
false
1
false
jordane95/msmarco-passage-with-query
2022-07-26T08:25:24.000Z
null
false
a3eaedbf1567d2f8b47042e495063ebfdd3d4b3f
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/jordane95/msmarco-passage-with-query/resolve/main/README.md
--- license: afl-3.0 ---
jordane95
null
@misc{bajaj2018ms, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang}, year={2018}, eprint={1611.09268}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
false
1
false
jordane95/msmarco-passage-corpus-with-query
2022-07-27T02:02:45.000Z
null
false
c68710dd4a69eca02807018e4e93a4211b68c86a
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/jordane95/msmarco-passage-corpus-with-query/resolve/main/README.md
--- license: afl-3.0 ---
mingz
null
null
null
false
1
false
mingz/demo
2022-07-26T10:55:21.000Z
null
false
2da7b5117419f7bd56e09b02442fed0c5c2e934a
[]
[]
https://huggingface.co/datasets/mingz/demo/resolve/main/README.md
asparius
null
null
null
false
36
false
asparius/demirtas-movie
2022-07-26T11:56:21.000Z
null
false
b9a7ed6dcfa2236fcfd4cc28fd129f5642ddf89d
[]
[ "license:mit" ]
https://huggingface.co/datasets/asparius/demirtas-movie/resolve/main/README.md
--- license: mit ---
frogvre
null
null
null
false
1
false
frogvre/lgo1
2022-07-26T14:20:16.000Z
null
false
dc2f783401fca4dc3c2fd7dd2b3b54892ca65332
[]
[ "license:unknown" ]
https://huggingface.co/datasets/frogvre/lgo1/resolve/main/README.md
--- license: unknown ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-deepset__germanquad-7176bd7d-11875589
2022-07-26T14:40:30.000Z
null
false
0d81b9869910b53d9fac2bddf8d3e2eb2afe8a50
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:deepset/germanquad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-deepset__germanquad-7176bd7d-11875589/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - deepset/germanquad eval_info: task: extractive_question_answering model: deepset/gelectra-base-germanquad metrics: [] dataset_name: deepset/germanquad dataset_config: plain_text dataset_split: test col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/gelectra-base-germanquad * Dataset: deepset/germanquad * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sjlree](https://huggingface.co/sjlree) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-deepset__germanquad-7176bd7d-11875590
2022-07-26T14:40:57.000Z
null
false
0b848ff3c9d5c4d515e9fea94415453bc756d489
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:deepset/germanquad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-deepset__germanquad-7176bd7d-11875590/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - deepset/germanquad eval_info: task: extractive_question_answering model: deepset/gelectra-large-germanquad metrics: [] dataset_name: deepset/germanquad dataset_config: plain_text dataset_split: test col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/gelectra-large-germanquad * Dataset: deepset/germanquad * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sjlree](https://huggingface.co/sjlree) for evaluating this model.
Achen
null
null
null
false
1
false
Achen/voc-test
2022-07-27T03:24:17.000Z
null
false
8c73891571e4da2fe888c7e9ed21167402492e59
[]
[ "license:bsd" ]
https://huggingface.co/datasets/Achen/voc-test/resolve/main/README.md
--- license: bsd ---
frgfm
null
@software{Howard_Imagewoof_2019, title={Imagewoof: a subset of 10 classes from Imagenet that aren't so easy to classify}, author={Jeremy Howard}, year={2019}, month={March}, publisher = {GitHub}, url = {https://github.com/fastai/imagenette#imagewoof} }
Imagewoof is a subset of 10 classes from Imagenet that aren't so easy to classify, since they're all dog breeds. The breeds are: Australian terrier, Border terrier, Samoyed, Beagle, Shih-Tzu, English foxhound, Rhodesian ridgeback, Dingo, Golden retriever, Old English sheepdog.
false
124
false
frgfm/imagewoof
2022-07-27T19:47:31.000Z
imagewoof
false
a48b72c4a7735d5c18f580f009fa66fa330af8da
[]
[ "annotations_creators:crowdsourced", "language:en", "language_creators:crowdsourced", "license:apache-2.0", "size_categories:1K<n<10K", "source_datasets:extended", "task_categories:image-classification", "task_ids:image-classification" ]
https://huggingface.co/datasets/frgfm/imagewoof/resolve/main/README.md
--- annotations_creators: - crowdsourced language: - en language_creators: - crowdsourced license: - apache-2.0 multilinguality: [] pretty_name: Imagewoof size_categories: - 1K<n<10K source_datasets: - extended task_categories: - image-classification task_ids: - image-classification paperswithcode_id: imagewoof --- # Dataset Card for Imagewoof ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/fastai/imagenette#imagewoof - **Repository:** https://github.com/fastai/imagenette - **Leaderboard:** https://paperswithcode.com/sota/image-classification-on-imagewoof ### Dataset Summary A smaller subset of 10 classes from [Imagenet](https://huggingface.co/datasets/imagenet-1k#dataset-summary) that aren't so easy to classify, since they're all dog breeds. This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward), and this repository is only there to share his work on this platform. The repository owner takes no credit of any kind in the creation, curation or packaging of the dataset. ### Supported Tasks and Leaderboards - `image-classification`: The dataset can be used to train a model for Image Classification. ### Languages The class labels in the dataset are in English. ## Dataset Structure ### Data Instances A data point comprises an image URL and its classification label. ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=320x320 at 0x19FA12186D8>, 'label': 'Beagle', } ``` ### Data Fields - `image`: A `PIL.Image.Image` object containing the image. - `label`: the expected class label of the image. ### Data Splits | |train|validation| |---------|----:|---------:| |imagewoof| 9025| 3929| ## Dataset Creation ### Curation Rationale cf. https://huggingface.co/datasets/imagenet-1k#curation-rationale ### Source Data #### Initial Data Collection and Normalization Imagewoof is a subset of [ImageNet](https://huggingface.co/datasets/imagenet-1k). Information about data collection of the source data can be found [here](https://huggingface.co/datasets/imagenet-1k#initial-data-collection-and-normalization). ### Annotations #### Annotation process cf. https://huggingface.co/datasets/imagenet-1k#annotation-process #### Who are the annotators? cf. https://huggingface.co/datasets/imagenet-1k#who-are-the-annotators ### Personal and Sensitive Information cf. https://huggingface.co/datasets/imagenet-1k#personal-and-sensitive-information ## Considerations for Using the Data ### Social Impact of Dataset cf. https://huggingface.co/datasets/imagenet-1k#social-impact-of-dataset ### Discussion of Biases cf. https://huggingface.co/datasets/imagenet-1k#discussion-of-biases ### Other Known Limitations cf. https://huggingface.co/datasets/imagenet-1k#other-known-limitations ## Additional Information ### Dataset Curators cf. https://huggingface.co/datasets/imagenet-1k#dataset-curators and Jeremy Howard ### Licensing Information [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0). ### Citation Information ``` @software{Howard_Imagewoof_2019, title={Imagewoof: a subset of 10 classes from Imagenet that aren't so easy to classify}, author={Jeremy Howard}, year={2019}, month={March}, publisher = {GitHub}, url = {https://github.com/fastai/imagenette#imagewoof} } ``` ### Contributions This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward) and published on [Github](https://github.com/fastai/imagenette). It was then only integrated into HuggingFace Datasets by [@frgfm](https://huggingface.co/frgfm).
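A hedged sketch of consuming one of these records end to end: the repo id comes from the record's `id` field, while the `320px` config name is an assumption inferred from the card's 320x320 example and may differ from the repo's actual configs:

```python
from datasets import load_dataset

# "320px" is an assumed config name; check the repo for the real ones.
ds = load_dataset("frgfm/imagewoof", "320px", split="validation")
example = ds[0]
print(example["image"].size, example["label"])
```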
robertmyers
null
null
null
false
1
false
robertmyers/convo_base
2022-07-26T15:56:10.000Z
null
false
3f80d82f04e37d40b5972c5fcc5bb0e7c7830e76
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/robertmyers/convo_base/resolve/main/README.md
--- license: afl-3.0 ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad-47db8743-11885591
2022-07-26T16:38:56.000Z
null
false
a0d9ca0b1c481c4e8b2100bb6eb0457559e3f508
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad-47db8743-11885591/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad eval_info: task: extractive_question_answering model: Graphcore/roberta-base-squad metrics: [] dataset_name: squad dataset_config: plain_text dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: Graphcore/roberta-base-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Narayana](https://huggingface.co/Narayana) for evaluating this model.
naem1023
null
null
null
false
1
false
naem1023/final_aug_2000
2022-07-26T17:52:12.000Z
null
false
7c7d48c7cf5047d41d499131f6e3e5d57fc8abe5
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/naem1023/final_aug_2000/resolve/main/README.md
--- license: afl-3.0 ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-8f63e3f3-11895592
2022-07-26T18:52:05.000Z
null
false
2eb12757b146d9c1fbfda4e8f8d4a10c520de326
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-8f63e3f3-11895592/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: sshleifer/distilbart-cnn-12-6 metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: test col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-cnn-12-6 * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-8f63e3f3-11895594
2022-07-26T18:47:15.000Z
null
false
9acbc0b433d326333ebec9838d2cfd3dd96e4a6c
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-8f63e3f3-11895594/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: philschmid/distilbart-cnn-12-6-samsum metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: test col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: philschmid/distilbart-cnn-12-6-samsum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-8f63e3f3-11895593
2022-07-26T18:34:27.000Z
null
false
09f0a5fb1b4b7bb1b18dac3c50ceeeaae00969fe
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-8f63e3f3-11895593/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: sshleifer/distilbart-cnn-6-6 metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: test col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-cnn-6-6 * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
biglam
null
null
null
false
1
false
biglam/berlin_state_library_ocr
2022-08-05T09:36:24.000Z
null
false
a890935d5bd754ddc5b85f56b6f34f6d2bb4abba
[]
[ "annotations_creators:machine-generated", "language:de", "language:nl", "language:en", "language:fr", "language:es", "language_creators:expert-generated", "license:cc-by-4.0", "multilinguality:multilingual", "size_categories:1M<n<10M", "tags:ocr", "tags:library", "task_categories:fill-mask",...
https://huggingface.co/datasets/biglam/berlin_state_library_ocr/resolve/main/README.md
--- annotations_creators: - machine-generated language: - de - nl - en - fr - es language_creators: - expert-generated license: - cc-by-4.0 multilinguality: - multilingual pretty_name: Berlin State Library OCR size_categories: - 1M<n<10M source_datasets: [] tags: - ocr - library task_categories: - fill-mask - text-generation task_ids: - masked-language-modeling - language-modeling --- # Dataset Card for Berlin State Library OCR data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary > The digital collections of the SBB contain 153,942 digitized works from the time period of 1470 to 1945. > At the time of publication, 28,909 works have been OCR-processed resulting in 4,988,099 full-text pages. For each page with OCR text, the language has been determined by langid (Lui/Baldwin 2012). ### Supported Tasks and Leaderboards - `language-modeling`: this dataset has the potential to be used for training language models on historical/OCR'd text. Since it contains OCR confidence, language and date information for many examples, it is also possible to filter this dataset to more closely match the requirements for training data. - ### Languages The collection includes material across a large number of languages. The languages of the OCR text have been detected using [langid.py: An Off-the-shelf Language Identification Tool](https://aclanthology.org/P12-3005) (Lui & Baldwin, ACL 2012). The dataset includes a confidence score for the language prediction. **Note:** not all examples may have been successfully matched to the language prediction table from the original data. The frequency of the top ten languages in the dataset is shown below: | | frequency | |----|------------------| | de | 3.20963e+06 | | nl | 491322 | | en | 473496 | | fr | 216210 | | es | 68869 | | lb | 33625 | | la | 27397 | | pl | 17458 | | it | 16012 | | zh | 11971 | [More Information Needed] ## Dataset Structure ### Data Instances Each example represents a single page of OCR'd text. A single example of the dataset is as follows: ```python {'aut': 'Doré, Henri', 'date': '1912', 'file name': '00000218.xml', 'language': 'fr', 'language_confidence': 1.0, 'place': 'Chang-hai', 'ppn': '646426230', 'publisher': 'Imprimerie de la Mission Catholique', 'text': "— 338 — Cela fait, on enterre la statuette qu’on vient d’outrager, atten dant la réalisation sur la personne elle-même. C’est l’outrage en effigie. 
Un deuxième moyen, c’est de représenter l’Esprit Vengeur sous la figure d’un fier-à-bras, armé d’un sabre, ou d’une pique, et de lui confier tout le soin de sa vengeance. On multiplie les incantations et les offrandes en son honneur, pour le porter au paroxysme de la fureur, et inspirer à l’Esprit malin l’idée de l’exécution de ses désirs : en un mot, on fait tout pour faire passer en son cœur la rage de vengeance qui consume le sien propre. C’est une invention diabolique imaginée pour assouvir sa haine sur l’ennemi qu’on a en horreur. Ailleurs, ce n’est qu’une figurine en bois ou en papier, qui est lancée contre l’ennemi; elle se dissimule, ou prend des formes fantastiques pour acomplir son œuvre de vengeance. Qu’on se rappelle la panique qui régna dans la ville de Nan- king ifâ ffl, et ailleurs, l’année où de méchantes gens répandirent le bruit que des hommes de papier volaient en l’air et coupaient les tresses de cheveux des Chinois. Ce fut une véritable terreur, tous étaient affolés, et il y eut à cette occasion de vrais actes de sauvagerie. Voir historiettes sur les envoûtements : Wieger Folk-Lore, N os 50, 128, 157, 158, 159. Corollaire. Les Tao-niu jift fx ou femmes “ Tao-clie'’. A cette super stition peut se rapporter la pratique des magiciennes du Kiang- sou ■n: m, dans les environs de Chang-hai ± m, par exemple. Ces femmes portent constamment avec- elles une statue réputée merveilleuse : elle n’a que quatre ou cinq pouces de hauteur ordinairement. A force de prières, d’incantations, elles finissent par la rendre illuminée, vivante et parlante, ou plutôt piaillarde, car elle ne répond que par des petits cris aigus et répétés aux demandes qu’on lui adressé; elle paraît comme animée, sautille,", 'title': 'Les pratiques superstitieuses', 'wc': [1.0, 0.7266666889, 1.0, 0.9950000048, 0.7059999704, 0.5799999833, 0.7142857313, 0.7250000238, 0.9855555296, 0.6880000234, 0.7099999785, 0.7054545283, 1.0, 0.8125, 0.7950000167, 0.5681818128, 0.5500000119, 0.7900000215, 0.7662500143, 0.8830000162, 0.9359999895, 0.7411110997, 0.7950000167, 0.7962499857, 0.6949999928, 0.8937500119, 0.6299999952, 0.8820000291, 1.0, 0.6781818271, 0.7649999857, 0.437142849, 1.0, 1.0, 0.7416666746, 0.6474999785, 0.8166666627, 0.6825000048, 0.75, 0.7033333182, 0.7599999905, 0.7639999986, 0.7516666651, 1.0, 1.0, 0.5466666818, 0.7571428418, 0.8450000286, 1.0, 0.9350000024, 1.0, 1.0, 0.7099999785, 0.7250000238, 0.8588888645, 0.8366666436, 0.7966666818, 1.0, 0.9066666961, 0.7288888693, 1.0, 0.8333333135, 0.8787500262, 0.6949999928, 0.8849999905, 0.5816666484, 0.5899999738, 0.7922222018, 1.0, 1.0, 0.6657142639, 0.8650000095, 0.7674999833, 0.6000000238, 0.9737499952, 0.8140000105, 0.978333354, 1.0, 0.7799999714, 0.6650000215, 1.0, 0.823333323, 1.0, 0.9599999785, 0.6349999905, 1.0, 0.9599999785, 0.6025000215, 0.8525000215, 0.4875000119, 0.675999999, 0.8833333254, 0.6650000215, 0.7566666603, 0.6200000048, 0.5049999952, 0.4524999857, 1.0, 0.7711111307, 0.6666666865, 0.7128571272, 1.0, 0.8700000048, 0.6728571653, 1.0, 0.6800000072, 0.6499999762, 0.8259999752, 0.7662500143, 0.6725000143, 0.8362500072, 1.0, 0.6600000262, 0.6299999952, 0.6825000048, 0.7220000029, 1.0, 1.0, 0.6587499976, 0.6822222471, 1.0, 0.8339999914, 0.6449999809, 0.7062500119, 0.9150000215, 0.8824999928, 0.6700000167, 0.7250000238, 0.8285714388, 0.5400000215, 1.0, 0.7966666818, 0.7350000143, 0.6188889146, 0.6499999762, 1.0, 0.7459999919, 0.5799999833, 0.7480000257, 1.0, 0.9333333373, 0.790833354, 0.5550000072, 0.6700000167, 0.7766666412, 0.8280000091, 
0.7250000238, 0.8669999838, 0.5899999738, 1.0, 0.7562500238, 1.0, 0.7799999714, 0.8500000238, 0.4819999933, 0.9350000024, 1.0, 0.8399999738, 0.7950000167, 1.0, 0.9474999905, 0.453333348, 0.6575000286, 0.9399999976, 0.6733333468, 0.8042857051, 0.7599999905, 1.0, 0.7355555296, 0.6499999762, 0.7118181586, 1.0, 0.621999979, 0.7200000286, 1.0, 0.853333354, 0.6650000215, 0.75, 0.7787500024, 1.0, 0.8840000033, 1.0, 0.851111114, 1.0, 0.9142857194, 1.0, 0.8899999857, 1.0, 0.9024999738, 1.0, 0.6166666746, 0.7533333302, 0.7766666412, 0.6637499928, 1.0, 0.8471428752, 0.7012500167, 0.6600000262, 0.8199999928, 1.0, 0.7766666412, 0.3899999857, 0.7960000038, 0.8050000072, 1.0, 0.8000000119, 0.7620000243, 1.0, 0.7163636088, 0.5699999928, 0.8849999905, 0.6166666746, 0.8799999952, 0.9058333039, 1.0, 0.6866666675, 0.7810000181, 0.3400000036, 0.2599999905, 0.6333333254, 0.6524999738, 0.4875000119, 0.7425000072, 0.75, 0.6863636374, 1.0, 0.8742856979, 0.137500003, 0.2099999934, 0.4199999869, 0.8216666579, 1.0, 0.7563636303, 0.3000000119, 0.8579999804, 0.6679999828, 0.7099999785, 0.7875000238, 0.9499999881, 0.5799999833, 0.9150000215, 0.6600000262, 0.8066666722, 0.729090929, 0.6999999881, 0.7400000095, 0.8066666722, 0.2866666615, 0.6700000167, 0.9225000143, 1.0, 0.7599999905, 0.75, 0.6899999976, 0.3600000143, 0.224999994, 0.5799999833, 0.8874999881, 1.0, 0.8066666722, 0.8985714316, 0.8827272654, 0.8460000157, 0.8880000114, 0.9533333182, 0.7966666818, 0.75, 0.8941666484, 1.0, 0.8450000286, 0.8666666746, 0.9533333182, 0.5883333087, 0.5799999833, 0.6549999714, 0.8600000143, 1.0, 0.7585714459, 0.7114285827, 1.0, 0.8519999981, 0.7250000238, 0.7437499762, 0.6639999747, 0.8939999938, 0.8877778053, 0.7300000191, 1.0, 0.8766666651, 0.8019999862, 0.8928571343, 1.0, 0.853333354, 0.5049999952, 0.5416666865, 0.7963636518, 0.5600000024, 0.8774999976, 0.6299999952, 0.5749999881, 0.8199999928, 0.7766666412, 1.0, 0.9850000143, 0.5674999952, 0.6240000129, 1.0, 0.9485714436, 1.0, 0.8174999952, 0.7919999957, 0.6266666651, 0.7887499928, 0.7825000286, 0.5366666913, 0.65200001, 0.832857132, 0.7488889098]} ``` ### Data Fields - 'file name': filename of the original XML file - 'text': OCR'd text for that page of the item - 'wc': the word confidence for each token predicted by the OCR engine - 'ppn': 'Pica production numbers' an internal ID used by the library. See [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.2702544.svg)](https://doi.org/10.5281/zenodo.2702544) for more details. 'language': language predicted by `langid.py` (see above for more details) -'language_confidence': confidence score given by `langid.py` - publisher: publisher of the item in which the text appears - place: place of publication of the item in which the text appears - date: date of the item in which the text appears - title: title of the item in which the text appears - aut: author of the item in which the text appears [More Information Needed] ### Data Splits This dataset contains only a single split `train`. ## Dataset Creation The dataset is created from [OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB)](https://doi.org/10.5281/zenodo.3257041) hosted on Zenodo. ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The dataset is created from [OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB)](https://doi.org/10.5281/zenodo.3257041) hosted on Zenodo. 
This dataset includes text content produced through running Optical Character Recognition across 153,942 digitized works held by the Berlin State Library. The [dataprep.ipynb](https://huggingface.co/datasets/biglam/berlin_state_library_ocr/blob/main/dataprep.ipynb) was used to create this dataset. To make the dataset more useful for training language models, the following steps were carried out: - the CSV `xml2csv_alto.csv`, which contains the full text corpus per document page (incl.OCR word confidences) was loaded using the `datasets` library - this CSV was augmented with language information from `corpus-language.pkl` **note** some examples don't find a match for this. Sometimes this is because a text is blank, but some actual text may be missing predicted language information - the CSV was further augmented by trying to map the PPN to fields in a metadata download created using [https://github.com/elektrobohemian/StabiHacks/blob/master/oai-analyzer/oai-analyzer.py](https://github.com/elektrobohemian/StabiHacks/blob/master/oai-analyzer/oai-analyzer.py). **note** not all examples are successfully matched to this metadata download. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process This dataset contains machine-produced annotations for: - the confidence scores the OCR engines used to produce the full-text materials. - the predicted languages and associated confidence scores produced by `langid.py` The dataset also contains metadata for the following fields: - author - publisher - the place of publication - title #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information This dataset contains historical material, potentially including names, addresses etc., but these are not likely to refer to living individuals. [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases As with any historical material, the views and attitudes expressed in some texts will likely diverge from contemporary beliefs. One should consider carefully how this potential bias may become reflected in language models trained on this data. [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Initial data created by: Labusch, Kai; Zellhöfer, David ### Licensing Information [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) ### Citation Information ``` @dataset{labusch_kai_2019_3257041, author = {Labusch, Kai and Zellhöfer, David}, title = {{OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB)}}, month = jun, year = 2019, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.3257041}, url = {https://doi.org/10.5281/zenodo.3257041} } ``` ### Contributions Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
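Because each page carries a langid prediction, a language confidence score and per-token OCR word confidences (`wc`), the corpus can be filtered before language-model training, as the summary above suggests. A minimal sketch using the `datasets` library; the thresholds (0.9 language confidence, 0.8 mean word confidence) are illustrative assumptions, not recommendations from the dataset creators:

```python
from statistics import mean

from datasets import load_dataset

# The dataset ships a single "train" split (see Data Splits above).
dataset = load_dataset("biglam/berlin_state_library_ocr", split="train")

def keep_page(example, min_lang_conf=0.9, min_mean_wc=0.8):
    # Keep pages confidently identified as German with reasonably clean OCR.
    if example["language"] != "de":
        return False
    # Some pages have no matched language prediction (see the note above).
    if example["language_confidence"] is None or example["language_confidence"] < min_lang_conf:
        return False
    # `wc` holds one OCR confidence per token; average it into a page-level score.
    return bool(example["wc"]) and mean(example["wc"]) >= min_mean_wc

filtered = dataset.filter(keep_page)
print(f"kept {len(filtered)} of {len(dataset)} pages")
```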
tarteel-ai
null
@article{malhas2020ayatec, author = {Malhas, Rana and Elsayed, Tamer}, title = {AyaTEC: Building a Reusable Verse-Based Test Collection for Arabic Question Answering on the Holy Qur’an}, year = {2020}, issue_date = {November 2020}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, volume = {19}, number = {6}, issn = {2375-4699}, url = {https://doi.org/10.1145/3400396}, doi = {10.1145/3400396}, journal = {ACM Trans. Asian Low-Resour. Lang. Inf. Process.}, month = {oct}, articleno = {78}, numpages = {21}, keywords = {evaluation, Classical Arabic} }
The absence of publicly available reusable test collections for Arabic question answering on the Holy Qur’an has impeded the possibility of fairly comparing the performance of systems in that domain. In this article, we introduce AyaTEC, a reusable test collection for verse-based question answering on the Holy Qur’an, which serves as a common experimental testbed for this task. AyaTEC includes 207 questions (with their corresponding 1,762 answers) covering 11 topic categories of the Holy Qur’an that target the information needs of both curious and skeptical users. To the best of our effort, the answers to the questions (each represented as a sequence of verses) in AyaTEC were exhaustive—that is, all qur’anic verses that directly answered the questions were exhaustively extracted and annotated. To facilitate the use of AyaTEC in evaluating the systems designed for that task, we propose several evaluation measures to support the different types of questions and the nature of verse-based answers while integrating the concept of partial matching of answers in the evaluation.
false
1
false
tarteel-ai/quranqa
2022-07-27T02:28:31.000Z
null
false
88b10b40e3197c83f2995771e057515f584ecd27
[]
[ "annotations_creators:expert-generated", "language:ar", "language_creators:expert-generated", "license:cc-by-nd-4.0", "multilinguality:monolingual", "size_categories:n<1K", "size_categories:1K<n<10K", "source_datasets:original", "tags:quran", "tags:qa", "task_categories:question-answering", "t...
https://huggingface.co/datasets/tarteel-ai/quranqa/resolve/main/README.md
--- annotations_creators: - expert-generated language: - ar language_creators: - expert-generated license: - cc-by-nd-4.0 multilinguality: - monolingual pretty_name: Qur'anic Reading Comprehension Dataset size_categories: - n<1K - 1K<n<10K source_datasets: - original tags: - quran - qa task_categories: - question-answering task_ids: - extractive-qa --- # Dataset Card for the Qur'anic Reading Comprehension Dataset (QRCD) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://sites.google.com/view/quran-qa-2022/home - **Repository:** https://gitlab.com/bigirqu/quranqa/-/tree/main/ - **Paper:** https://dl.acm.org/doi/10.1145/3400396 - **Leaderboard:** - **Point of Contact:** @piraka9011 ### Dataset Summary The QRCD (Qur'anic Reading Comprehension Dataset) is composed of 1,093 tuples of question-passage pairs that are coupled with their extracted answers to constitute 1,337 question-passage-answer triplets. ### Supported Tasks and Leaderboards This task is evaluated as a ranking task. To give credit to a QA system that may retrieve an answer (not necessarily at the first rank) that does not fully match one of the gold answers but partially matches it, we use partial Reciprocal Rank (pRR) measure. It is a variant of the traditional Reciprocal Rank evaluation metric that considers partial matching. pRR is the official evaluation measure of this shared task. We will also report Exact Match (EM) and F1@1, which are evaluation metrics applied only on the top predicted answer. The EM metric is a binary measure that rewards a system only if the top predicted answer exactly matches one of the gold answers. Whereas, the F1@1 metric measures the token overlap between the top predicted answer and the best matching gold answer. To get an overall evaluation score, each of the above measures is averaged over all questions. ### Languages Qur'anic Arabic ## Dataset Structure ### Data Instances To simplify the structure of the dataset, each tuple contains one passage, one question and a list that may contain one or more answers to that question, as shown below: ```json { "pq_id": "38:41-44_105", "passage": "واذكر عبدنا أيوب إذ نادى ربه أني مسني الشيطان بنصب وعذاب. اركض برجلك هذا مغتسل بارد وشراب. ووهبنا له أهله ومثلهم معهم رحمة منا وذكرى لأولي الألباب. 
وخذ بيدك ضغثا فاضرب به ولا تحنث إنا وجدناه صابرا نعم العبد إنه أواب.", "surah": 38, "verses": "41-44", "question": "من هو النبي المعروف بالصبر؟", "answers": [ { "text": "أيوب", "start_char": 12 } ] } ``` Each Qur’anic passage in QRCD may have more than one occurrence; and each passage occurrence is paired with a different question. Likewise, each question in QRCD may have more than one occurrence; and each question occurrence is paired with a different Qur’anic passage. The source of the Qur'anic text in QRCD is the Tanzil project download page, which provides verified versions of the Holy Qur'an in several scripting styles. We have chosen the simple-clean text style of Tanzil version 1.0.2. ### Data Fields * `pq_id`: Sample ID * `passage`: Context text * `surah`: Surah number * `verses`: Verse range * `question`: Question text * `answers`: List of answers and their start character ### Data Splits | **Dataset** | **%** | **# Question-Passage Pairs** | **# Question-Passage-Answer Triplets** | |-------------|:-----:|:-----------------------------:|:---------------------------------------:| | Training | 65% | 710 | 861 | | Development | 10% | 109 | 128 | | Test | 25% | 274 | 348 | | All | 100% | 1,093 | 1,337 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The QRCD v1.1 dataset is distributed under the CC-BY-ND 4.0 License https://creativecommons.org/licenses/by-nd/4.0/legalcode For a human-readable summary of (and not a substitute for) the above CC-BY-ND 4.0 License, please refer to https://creativecommons.org/licenses/by-nd/4.0/ ### Citation Information ``` @article{malhas2020ayatec, author = {Malhas, Rana and Elsayed, Tamer}, title = {AyaTEC: Building a Reusable Verse-Based Test Collection for Arabic Question Answering on the Holy Qur’an}, year = {2020}, issue_date = {November 2020}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, volume = {19}, number = {6}, issn = {2375-4699}, url = {https://doi.org/10.1145/3400396}, doi = {10.1145/3400396}, journal = {ACM Trans. Asian Low-Resour. Lang. Inf. Process.}, month = {oct}, articleno = {78}, numpages = {21}, keywords = {evaluation, Classical Arabic} } ``` ### Contributions Thanks to [@piraka9011](https://github.com/piraka9011) for adding this dataset.
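The evaluation measures described above can be illustrated with a small, unofficial sketch. It uses plain whitespace tokenization and scores only the top predicted answer; the official pRR measure, which ranks multiple candidate answers and applies the task's own normalization, is not reproduced here:

```python
def exact_match(prediction: str, gold_answers: list[str]) -> int:
    # Binary EM: rewards the system only if the top prediction
    # exactly matches one of the gold answers.
    return int(any(prediction == gold for gold in gold_answers))


def f1_at_1(prediction: str, gold_answers: list[str]) -> float:
    # Token-overlap F1 between the top prediction and the
    # best-matching gold answer, using whitespace tokenization.
    pred_tokens = prediction.split()
    best = 0.0
    for gold in gold_answers:
        gold_tokens = gold.split()
        if not pred_tokens or not gold_tokens:
            continue
        common = sum(
            min(pred_tokens.count(tok), gold_tokens.count(tok))
            for tok in set(pred_tokens)
        )
        if common == 0:
            continue
        precision = common / len(pred_tokens)
        recall = common / len(gold_tokens)
        best = max(best, 2 * precision * recall / (precision + recall))
    return best


print(exact_match("أيوب", ["أيوب"]))              # 1
print(round(f1_at_1("النبي أيوب", ["أيوب"]), 2))  # 0.67
```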
BirdL
null
null
null
false
1
false
BirdL/DallData
2022-09-28T21:12:02.000Z
null
false
e7d4f3001b1c33740f10caa51c61cd4199e831e0
[]
[ "license:other", "size_categories:1K<n<10K", "task_categories:unconditional-image-generation" ]
https://huggingface.co/datasets/BirdL/DallData/resolve/main/README.md
--- annotations_creators: [] language: [] language_creators: [] license: - other multilinguality: [] pretty_name: DALL-E Latent Space Mapping size_categories: - 1K<n<10K source_datasets: [] tags: [] task_categories: - unconditional-image-generation task_ids: [] --- DallData is a non-exhaustive look into DALL-E Mega(1)'s unconditional image generation. This is under the [BirdL-AirL License](https://huggingface.co/spaces/BirdL/license/). (1) ```bibtex @misc{Dayma_DALL·E_Mini_2021, author = {Dayma, Boris and Patil, Suraj and Cuenca, Pedro and Saifullah, Khalid and Abraham, Tanishq and Lê Khắc, Phúc and Melas, Luke and Ghosh, Ritobrata}, doi = {10.5281/zenodo.5146400}, month = {7}, title = {DALL·E Mini}, url = {https://github.com/borisdayma/dalle-mini}, year = {2021} } ```
biglam
null
@misc{ContentiousContextsCorpus2021, author = {Cultural AI}, title = {Contentious Contexts Corpus}, year = {2021}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/cultural-ai/ConConCor}}, }
This dataset contains extracts from historical Dutch newspapers that contain potentially contentious keywords (according to present-day sensibilities). The dataset provides multiple annotations per instance, giving the option to quantify agreement scores for annotations. This dataset can be used to track how words and their meanings have changed over time.
false
1
false
biglam/contentious_contexts
2022-08-01T17:02:11.000Z
null
false
794edc666ccae9f296d033a99a826a3f41f34385
[]
[ "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "language:nl", "language_creators:machine-generated", "license:cc-by-2.0", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "tags:newspapers", "tags:historic", "tags:dutch", "tag...
https://huggingface.co/datasets/biglam/contentious_contexts/resolve/main/README.md
--- annotations_creators: - expert-generated - crowdsourced language: - nl language_creators: - machine-generated license: - cc-by-2.0 multilinguality: - monolingual pretty_name: Contentious Contexts Corpus size_categories: - 1K<n<10K source_datasets: - original tags: - newspapers - historic - dutch - problematic - ConConCor task_categories: - text-classification task_ids: - sentiment-scoring - multi-label-classification --- # Dataset Card for Contentious Contexts Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [ConConCor](https://github.com/cultural-ai/ConConCor) - **Repository:** [ConConCor](https://github.com/cultural-ai/ConConCor) - **Paper:** [N/A] - **Leaderboard:** [N/A] - **Point of Contact:** [Jacco van Ossenbruggen](https://github.com/jrvosse) **Note** One can also find a Datasheet produced by the creators of this dataset as a [PDF document](https://github.com/cultural-ai/ConConCor/blob/main/Dataset/DataSheet.pdf) ### Dataset Summary This dataset contains extracts from historical Dutch newspapers containing keywords of potentially contentious words (according to present-day sensibilities). The dataset contains multiple annotations per instance, given the option to quantify agreement scores for annotations. This dataset can be used to track how words and their meanings have changed over time ### Supported Tasks and Leaderboards - `text-classification`: This dataset can be used for tracking how the meanings of words in different contexts have changed and become contentious over time ### Languages The text in the dataset is in Dutch. The responses are available in both English and Dutch. Suggestions, where present, are only in Dutch. 
The associated BCP-47 code is `nl` ## Dataset Structure ### Data Instances ``` { 'extract_id': 'H97', 'text': 'en waardoor het eerste doel wordt voorbijgestreefd om voor den 5D5c5Y 5d-5@5j5g5d5e5Z5V5V5c een speciale eigen werkingssfeer te scheppen.Intusschen is het', 'target': '5D 5c5Y5d-5@5j5g5d5e5Z5V5V5c', 'annotator_responses_english': [ {'id': 'unknown_2a', 'response': 'Not contentious'}, {'id': 'unknown_2b', 'response': 'Contentious according to current standards'}, {'id': 'unknown_2c', 'response': "I don't know"}, {'id': 'unknown_2d', 'response': 'Contentious according to current standards'}, {'id': 'unknown_2e', 'response': 'Not contentious'}, {'id': 'unknown_2f', 'response': "I don't know"}, {'id': 'unknown_2g', 'response': 'Not contentious'}], 'annotator_responses_dutch': [ {'id': 'unknown_2a', 'response': 'Niet omstreden'}, {'id': 'unknown_2b', 'response': 'Omstreden naar huidige maatstaven'}, {'id': 'unknown_2c', 'response': 'Weet ik niet'}, {'id': 'unknown_2d', 'response': 'Omstreden naar huidige maatstaven'}, {'id': 'unknown_2e', 'response': 'Niet omstreden'}, {'id': 'unknown_2f', 'response': 'Weet ik niet'}, {'id': 'unknown_2g', 'response': 'Niet omstreden'}], 'annotator_suggestions': [ {'id': 'unknown_2a', 'suggestion': ''}, {'id': 'unknown_2b', 'suggestion': 'ander ras nodig'}, {'id': 'unknown_2c', 'suggestion': 'personen van ander ras'}, {'id': 'unknown_2d', 'suggestion': ''}, {'id': 'unknown_2e', 'suggestion': ''}, {'id': 'unknown_2f', 'suggestion': ''}, {'id': 'unknown_2g', 'suggestion': 'ras'}] } ``` ### Data Fields |extract_id|text|target|annotator_responses_english|annotator_responses_dutch|annotator_suggestions| |---|---|---|---|---|---| |Unique identifier|Text|Target phrase or word|Response(translated to English)|Response in Dutch|Suggestions, if present| ### Data Splits Train: 2720 ## Dataset Creation ### Curation Rationale > Cultural heritage institutions recognise the problem of language use in their collections. The cultural objects in archives, libraries, and museums contain words and phrases that are inappropriate in modern society but were used broadly back in times. Such words can be offensive and discriminative. In our work, we use the term "contentious" to refer to all (potentially) inappropriate or otherwise sensitive words. For example, words suggestive of some (implicit or explicit) bias towards or against something. The National Archives of the Netherlands stated that they "explore the possibility of explaining language that was acceptable and common in the past and providing it with contemporary alternatives", meanwhile "keeping the original descriptions [with contentious words], because they give an idea of the time in which they were made or included in the collection". There is a page on the institution website where people can report "offensive language". ### Source Data #### Initial Data Collection and Normalization > The queries were run on OCR'd versions of the Europeana Newspaper collection, as provided by the KB National Library of the Netherlands. We limited our pool to text categorised as "article", thus excluding other types of texts such as advertisements and family notices. We then only focused our sample on the 6 decades between 1890-01-01 and 1941-12-31, as this is the period available in the Europeana newspaper corpus. The dataset represents a stratified sample set over target word, decade, and newspaper issue distribution metadata. 
For the final set of extracts for annotation, we gave extracts sampling weights proportional to their actual probabilities, as estimated from the initial set of extracts via trigram frequencies, rather than sampling uniformly. #### Who are the source language producers? [N/A] ### Annotations #### Annotation process > The annotation process included 3 stages: pilot annotation, expert annotation, and crowdsourced annotation on the "Prolific" platform. All stages required the participation of Dutch speakers. The pilot stage was intended for testing the annotation layout, the instructions clarity, the number of sentences provided as context, the survey questions, and the difficulty of the task in general. The Dutch-speaking members of the Cultural AI Lab were asked to test the annotation process and give their feedback anonymously using Google Sheets. Six volunteers contributed to the pilot stage, each annotating the same 40 samples where either a context of 3 or 5 sentences surrounding the term were given. An individual annotation sheet had a table layout with 4 options to choose for every sample > - 'Omstreden'(Contentious) > - 'Niet omstreden'(Not contentious) > - 'Weet ik niet'(I don't know) > - 'Onleesbare OCR'(Illegible OCR)</br> 2 open fields > - 'Andere omstreden termen in de context'(Other contentious terms in the context) > - 'Notities'(Notes)</br> and the instructions in the header. The rows were the samples with the highlighted words, the tickboxes for every option, and 2 empty cells for the open questions. The obligatory part of the annotation was to select one of the 4 options for every sample. Finding other contentious terms in the given sample, leaving notes, and answering 4 additional open questions at the end of the task were optional. Based on the received feedback and the answers to the open questions in the pilot study, the following decisions were made regarding the next, experts' annotation stage: > - The annotation layout was built in Google Forms as a questionnaire instead of the table layout in Google Sheets to make the data collection and analysis faster as the number of participants would increase; > - The context window of 5 sentences per sample was found optimal; > - The number of samples per annotator was increased to 50; > - The option 'Omstreden' (Contentious) was changed to 'Omstreden naar huidige maatstaven' ('Contentious according to current standards') to clarify that annotators should judge contentiousness of the word's use in context from today's perspective; > - The annotation instruction was edited to clarify 2 points: (1) that annotators while judging contentiousness should take into account not only a bolded word but also the context surrounding it, and (2) if a word seems even slightly contentious to an annotator, they should choose the option 'Omstreden naar huidige maatstaven' (Contentious according to current standards); > - The non-required field for every sample 'Notities' (Notes) was removed as there was an open question at the end of the annotation, where participants could leave their comments; > - Another open question was added at the end of the annotation asking how much time it took to complete the annotation. #### Who are the annotators? Volunteers and Expert annotators ### Personal and Sensitive Information [N/A] ## Considerations for Using the Data ## Accessing the annotations Each example text has multiple annotations. These annotations may not always agree. 
There are various approaches one could take to calculate agreement, including a majority vote, rating some annotators more highly, or calculating a score based on the 'votes' of annotators. Since there are many ways of doing this, we have not implemented this as part of the dataset loading script. An example of how one could generate an "OCR quality rating" based on the number of times an annotator labelled an example with `Illegible OCR`: ```python from collections import Counter def calculate_ocr_score(example): annotator_responses = [response['response'] for response in example['annotator_responses_english']] counts = Counter(annotator_responses) bad_ocr_ratings = counts.get("Illegible OCR") if bad_ocr_ratings is None: bad_ocr_ratings = 0 return round(1 - bad_ocr_ratings / len(annotator_responses), 3) dataset = dataset.map(lambda example: {"ocr_score": calculate_ocr_score(example)}) ``` To take the majority vote (or return a tie) based on whether an example is labelled contentious or not: ```python def most_common_vote(example): annotator_responses = [response['response'] for response in example['annotator_responses_english']] counts = Counter(annotator_responses) contentious_count = counts.get("Contentious according to current standards") if not contentious_count: contentious_count = 0 not_contentious_count = counts.get("Not contentious") if not not_contentious_count: not_contentious_count = 0 if contentious_count > not_contentious_count: return "contentious" if contentious_count < not_contentious_count: return "not_contentious" if contentious_count == not_contentious_count: return "tied" ``` ### Social Impact of Dataset This dataset can be used to see how words change in meaning over time. ### Discussion of Biases > Due to the nature of the project, some examples used in this documentation may be shocking or offensive. They are provided only as an illustration or explanation of the resulting dataset and do not reflect the opinions of the project team or their organisations. Since this project was explicitly created to help assess bias, it should be used primarily in the context of assessing bias and of methods for detecting bias. ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Cultural AI](https://github.com/cultural-ai) ### Licensing Information CC-BY ### Citation Information ``` @misc{ContentiousContextsCorpus2021, author = {Cultural AI}, title = {Contentious Contexts Corpus}, year = {2021}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/cultural-ai/ConConCor}}, } ```
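As with the OCR-quality example, the majority-vote helper can be applied across the corpus with `map`; a short usage sketch, where the column name `majority_label` is an illustrative choice and `most_common_vote` is the function defined in the block above:

```python
from datasets import load_dataset

dataset = load_dataset("biglam/contentious_contexts", split="train")

# Reuses most_common_vote() from the previous example.
dataset = dataset.map(lambda example: {"majority_label": most_common_vote(example)})

print(dataset["majority_label"][:5])
```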
autoevaluate
null
null
null
false
7
false
autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-9ce97676-11915596
2022-07-28T05:35:44.000Z
null
false
cde011e595294d34ae7c648fcf788b153e762256
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:kmfoda/booksum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-9ce97676-11915596/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - kmfoda/booksum eval_info: task: summarization model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9 metrics: [] dataset_name: kmfoda/booksum dataset_config: kmfoda--booksum dataset_split: test col_mapping: text: chapter target: summary_text --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9 * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-3fbf83bf-11925597
2022-07-28T05:57:02.000Z
null
false
9ed5cb6a383d487c045f685388b32a12a5ad17c6
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:kmfoda/booksum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-3fbf83bf-11925597/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - kmfoda/booksum eval_info: task: summarization model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9 metrics: [] dataset_name: kmfoda/booksum dataset_config: kmfoda--booksum dataset_split: test col_mapping: text: chapter target: summary_text --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9 * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
anzorq
null
null
null
false
5
false
anzorq/kbd_lat-ru
2022-07-31T02:04:38.000Z
null
false
7103492b78e16adacd7b2a3216f524265ae3d70c
[]
[ "language:kbd", "language:ru", "license:mit", "tags:translation", "source_datasets:original", "multilinguality:multilingual", "task_categories:translation", "task_categories:text2text-generation", "task_ids:translation", "task_ids:text2text-generation" ]
https://huggingface.co/datasets/anzorq/kbd_lat-ru/resolve/main/README.md
--- language: - kbd - ru license: - mit tags: - translation pretty_name: Kbd Ru Translation source_datasets: - original multilinguality: - multilingual task_categories: - translation - text2text-generation task_ids: - translation - text2text-generation ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-billsum-18299d18-11955600
2022-07-27T10:17:44.000Z
null
false
0342260156e61cc56a6f59314d0d5b036b985a39
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:billsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-billsum-18299d18-11955600/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - billsum eval_info: task: summarization model: pszemraj/led-base-book-summary metrics: [] dataset_name: billsum dataset_config: default dataset_split: test col_mapping: text: text target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/led-base-book-summary * Dataset: billsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-billsum-a6bd4aa5-11965601
2022-07-27T21:10:35.000Z
null
false
3ab156a12e3f1fecc0271712a0709c4ff979715f
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:billsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-billsum-a6bd4aa5-11965601/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - billsum eval_info: task: summarization model: pszemraj/long-t5-tglobal-base-16384-book-summary metrics: [] dataset_name: billsum dataset_config: default dataset_split: test col_mapping: text: text target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-book-summary * Dataset: billsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
hong
null
null
null
false
1
false
hong/FLO
2022-07-27T04:05:59.000Z
null
false
643d3b34887839055d1a1d41cee511eb2baaac31
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/hong/FLO/resolve/main/README.md
--- license: afl-3.0 ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad_v2-96a02c9c-11975602
2022-07-27T10:27:23.000Z
null
false
eae636f52231308429ea7b022850ba84f4cfd02b
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-96a02c9c-11975602/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: nlpconnect/roberta-base-squad2-nq metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: nlpconnect/roberta-base-squad2-nq * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ankur310794](https://huggingface.co/ankur310794) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad-ef91144d-11985603
2022-07-27T10:45:45.000Z
null
false
201d9a9e3d04b1bc66894808a1699731e3d45c0b
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad-ef91144d-11985603/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad eval_info: task: extractive_question_answering model: nlpconnect/roberta-base-squad2-nq metrics: [] dataset_name: squad dataset_config: plain_text dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: nlpconnect/roberta-base-squad2-nq * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ankur310794](https://huggingface.co/ankur310794) for evaluating this model.
chintagunta85
null
@article{smith2008overview, title={Overview of BioCreative II gene mention recognition}, author={Smith, Larry and Tanabe, Lorraine K and nee Ando, Rie Johnson and Kuo, Cheng-Ju and Chung, I-Fang and Hsu, Chun-Nan and Lin, Yu-Shi and Klinger, Roman and Friedrich, Christoph M and Ganchev, Kuzman and others}, journal={Genome biology}, volume={9}, number={S2}, pages={S2}, year={2008}, publisher={Springer} }
Nineteen teams presented results for the Gene Mention Task at the BioCreative II Workshop. In this task participants designed systems to identify substrings in sentences corresponding to gene name mentions. A variety of different methods were used and the results varied with a highest achieved F1 score of 0.8721. Here we present brief descriptions of all the methods used and a statistical analysis of the results. We also demonstrate that, by combining the results from all submissions, an F score of 0.9066 is feasible, and furthermore that the best result makes use of the lowest scoring submissions. For more details, see: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2559986/ The original dataset can be downloaded from: https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-ii-corpus/ This dataset has been converted to CoNLL format for NER using the following tool: https://github.com/spyysalo/standoff2conll
false
1
false
chintagunta85/bc2gm_test
2022-07-28T14:16:43.000Z
null
false
e24270fa1657929a060d81dc258fee812b3905f6
[]
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "language:en", "license:unknown", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:token-classification", "task_ids:named-entity-recognition" ]
https://huggingface.co/datasets/chintagunta85/bc2gm_test/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition paperswithcode_id: null pretty_name: Bc2GmCorpus --- # Dataset Card for bc2gm_corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/spyysalo/bc2gm-corpus/) - **Repository:** [Github](https://github.com/spyysalo/bc2gm-corpus/) - **Paper:** [NCBI](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2559986/) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `id`: Sentence identifier. - `tokens`: Array of tokens composing a sentence. - `ner_tags`: Array of tags, where `0` indicates no gene mention, `1` signals the first token of a gene mention and `2` the subsequent gene mention tokens. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@mahajandiwakar](https://github.com/mahajandiwakar) for adding this dataset.
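Given the `ner_tags` encoding described above, a small sketch of how one might inspect the tags with the `datasets` library. It assumes this upload keeps the upstream `bc2gm_corpus` feature schema (a `Sequence` of `ClassLabel`) and a `train` split; both are assumptions, not guarantees:

```python
from datasets import load_dataset

# Split name and feature schema are assumptions (see lead-in above).
dataset = load_dataset("chintagunta85/bc2gm_test", split="train")

# A ClassLabel feature maps the integer tags back to their string names.
label_names = dataset.features["ner_tags"].feature.names

example = dataset[0]
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{label_names[tag]}")
```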
prubach
null
null
null
false
1
false
prubach/knotprotSequences
2022-07-27T14:59:51.000Z
null
false
3575c59559542b22c2fdebcbfeac364b9b9e017c
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/prubach/knotprotSequences/resolve/main/README.md
--- license: apache-2.0 ---
moyix
null
null
null
false
9
false
moyix/debian_csrc
2022-07-27T20:54:47.000Z
null
false
1bca1af003ec196c15d46b370ee4241b26918666
[]
[ "license:mit" ]
https://huggingface.co/datasets/moyix/debian_csrc/resolve/main/README.md
--- license: mit ---
benfoley
null
null
null
false
1
false
benfoley/test-dataset
2022-07-27T23:41:15.000Z
null
false
a125fdedddadfc82908c3000165134876eb6a090
[]
[]
https://huggingface.co/datasets/benfoley/test-dataset/resolve/main/README.md
testing an audio dataset
oisinoh
null
@ONLINE {beansdata, author="Makerere AI Lab", title="Bean disease dataset", month="January", year="2020", url="https://github.com/AI-Lab-Makerere/ibean/" }
Beans is a dataset of images of beans taken in the field using smartphone cameras. It consists of 3 classes: 2 disease classes and the healthy class. Diseases depicted include Angular Leaf Spot and Bean Rust. Data was annotated by experts from the National Crops Resources Research Institute (NaCRRI) in Uganda and collected by the Makerere AI research lab.
false
1
false
oisinoh/tomatos
2022-07-28T01:12:09.000Z
null
false
6af7a842f6fc38d0a5d963fd44deaf1681935819
[]
[]
https://huggingface.co/datasets/oisinoh/tomatos/resolve/main/README.md
--- viewer: true ---
commanderstrife
null
@inproceedings{kim2004introduction, title={Introduction to the bio-entity recognition task at JNLPBA}, author={Kim, Jin-Dong and Ohta, Tomoko and Tsuruoka, Yoshimasa and Tateisi, Yuka and Collier, Nigel}, booktitle={Proceedings of the international joint workshop on natural language processing in biomedicine and its applications}, pages={70--75}, year={2004}, organization={Citeseer} }
The data came from the GENIA version 3.02 corpus (Kim et al., 2003). This was formed from a controlled search on MEDLINE using the MeSH terms human, blood cells and transcription factors. From this search 2,000 abstracts were selected and hand annotated according to a small taxonomy of 48 classes based on a chemical classification. Among the classes, 36 terminal classes were used to annotate the GENIA corpus.
false
1
false
commanderstrife/jnlpba
2022-07-28T06:46:36.000Z
null
false
6d7d0e843d195bae3df7338b261551080ed395f2
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/commanderstrife/jnlpba/resolve/main/README.md
--- license: apache-2.0 ---
hong
null
null
null
false
1
false
hong/zoosdataset
2022-07-28T05:21:23.000Z
null
false
4c31442562033cbc26c7f3d86e5236d082ea6799
[]
[]
https://huggingface.co/datasets/hong/zoosdataset/resolve/main/README.md
Slepp
null
null
null
false
1
false
Slepp/train
2022-07-28T08:18:50.000Z
null
false
586c8a9acf05865650594e634cb88ef3d4938136
[]
[]
https://huggingface.co/datasets/Slepp/train/resolve/main/README.md
for training
Slepp
null
null
null
false
1
false
Slepp/validation
2022-07-28T08:01:43.000Z
null
false
f6f04d6b8f8df133c3aa570f81b395b0c99b9fe7
[]
[]
https://huggingface.co/datasets/Slepp/validation/resolve/main/README.md
validation set
actdan2016
null
null
null
false
1
false
actdan2016/sample1
2022-08-29T02:12:39.000Z
redcaps
false
09013b8be5f523de806f9c21c548d2d6e7d92a02
[]
[ "arxiv:2111.11431", "annotations_creators:found", "language_creators:found", "language:en", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "task_categories:image-to-text", "task_ids:image-captioning" ]
https://huggingface.co/datasets/actdan2016/sample1/resolve/main/README.md
--- annotations_creators: - found language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10M<n<100M source_datasets: - original task_categories: - image-to-text task_ids: - image-captioning paperswithcode_id: redcaps pretty_name: RedCaps --- # Dataset Card for RedCaps ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Information](#dataset-information) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Information - **Path** [/home/daniel.baek/public/common/Data](/home/daniel.baek/public/common/Data) - **Content type** image - **Tag** sensor, common, ai, dataset - **Description** - **Homepage:** [RedCaps homepage](https://redcaps.xyz/) - **Repository:** [RedCaps repository](https://github.com/redcaps-dataset/redcaps-downloader) - **Paper:** [RedCaps: web-curated image-text data created by the people, for the people](https://arxiv.org/abs/2111.11431) - **Leaderboard:** - **Point of Contact:** [Karan Desai](mailto:kdexd@umich.edu) ### Dataset Summary RedCaps is a large-scale dataset of 12M image-text pairs collected from Reddit. Images and captions from Reddit depict and describe a wide variety of objects and scenes. The data is collected from a manually curated set of subreddits (350 total), which give coarse image labels and allow steering of the dataset composition without labeling individual instances. RedCaps data is created *by the people, for the people* – it contains everyday things that users like to share on social media, for example hobbies (r/crafts) and pets (r/shiba). Captions often contain specific and fine-grained descriptions (northern cardinal, taj mahal). Subreddit names provide relevant image labels (r/shiba) even when captions may not (mlem!), and sometimes may group many visually unrelated images through a common semantic meaning (r/perfectfit). ### Dataset Preprocessing This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. 
To fetch the images, use the following code: ```python from concurrent.futures import ThreadPoolExecutor from functools import partial import io import urllib.request import PIL.Image from datasets import load_dataset from datasets.utils.file_utils import get_datasets_user_agent USER_AGENT = get_datasets_user_agent() def fetch_single_image(image_url, timeout=None, retries=0): for _ in range(retries + 1): try: request = urllib.request.Request( image_url, data=None, headers={"user-agent": USER_AGENT}, ) with urllib.request.urlopen(request, timeout=timeout) as req: image = PIL.Image.open(io.BytesIO(req.read())) break except Exception: image = None return image def fetch_images(batch, num_threads, timeout=None, retries=0): fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries) with ThreadPoolExecutor(max_workers=num_threads) as executor: batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"])) return batch num_threads = 20 dset = load_dataset("red_caps", "rabbits_2017") dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads}) ``` Some image links point to more than one image. You can process and download those as follows: ```python from concurrent.futures import ThreadPoolExecutor from functools import partial import io import os import re import urllib.request import PIL.Image import datasets from datasets import load_dataset from datasets.utils.file_utils import get_datasets_user_agent USER_AGENT = get_datasets_user_agent() def fetch_single_image(image_url, timeout=None, retries=0): for _ in range(retries + 1): try: request = urllib.request.Request( image_url, data=None, headers={"user-agent": USER_AGENT}, ) with urllib.request.urlopen(request, timeout=timeout) as req: image = PIL.Image.open(io.BytesIO(req.read())) break except Exception: image = None return image def fetch_images(batch, num_threads, timeout=None, retries=0): fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries) with ThreadPoolExecutor(max_workers=num_threads) as executor: batch["image"] = list(executor.map(lambda image_urls: [fetch_single_image_with_args(image_url) for image_url in image_urls], batch["image_url"])) return batch def process_image_urls(batch): processed_batch_image_urls = [] for image_url in batch["image_url"]: processed_example_image_urls = [] image_url_splits = re.findall(r"http\S+", image_url) for image_url_split in image_url_splits: if "imgur" in image_url_split and "," in image_url_split: for image_url_part in image_url_split.split(","): if not image_url_part: continue image_url_part = image_url_part.strip() root, ext = os.path.splitext(image_url_part) if not root.startswith("http"): root = "http://i.imgur.com/" + root root = root.split("#")[0] if not ext: ext = ".jpg" ext = re.split(r"[?%]", ext)[0] image_url_part = root + ext processed_example_image_urls.append(image_url_part) else: processed_example_image_urls.append(image_url_split) processed_batch_image_urls.append(processed_example_image_urls) batch["image_url"] = processed_batch_image_urls return batch dset = load_dataset("red_caps", "rabbits_2017") dset = dset.map(process_image_urls, batched=True, num_proc=4) features = dset["train"].features.copy() features["image"] = datasets.Sequence(datasets.Image()) num_threads = 20 dset = dset.map(fetch_images, batched=True, batch_size=100, features=features, fn_kwargs={"num_threads": num_threads}) ``` Note that in the above code, we use the `datasets.Sequence` feature to
represent a list of images for the multi-image links. ### Supported Tasks and Leaderboards From the paper: > We have used our dataset to train deep neural networks that perform image captioning, and that learn transferable visual representations for a variety of downstream visual recognition tasks (image classification, object detection, instance segmentation). > We anticipate that the dataset could be used for a variety of vision-and-language (V&L) tasks, such as image or text retrieval or text-to-image synthesis. ### Languages All of the subreddits in RedCaps use English as their primary language. ## Dataset Structure ### Data Instances Each instance in RedCaps represents a single Reddit image post: ``` { 'image_id': 'bpzj7r', 'author': 'djasz1', 'image_url': 'https://i.redd.it/ho0wntksivy21.jpg', 'raw_caption': 'Found on a friend’s property in the Keys FL. She is now happily living in my house.', 'caption': 'found on a friend\'s property in the keys fl. she is now happily living in my house.', 'subreddit': 3, 'score': 72, 'created_utc': datetime.datetime(2019, 5, 18, 1, 36, 41), 'permalink': '/r/airplants/comments/bpzj7r/found_on_a_friends_property_in_the_keys_fl_she_is/', 'crosspost_parents': None } ``` ### Data Fields - `image_id`: Unique alphanumeric ID of the image post (assigned by Reddit). - `author`: Reddit username of the image post author. - `image_url`: Static URL for downloading the image associated with the post. - `raw_caption`: Textual description of the image, written by the post author. - `caption`: Cleaned version of "raw_caption" by us (see Q35). - `subreddit`: Name of subreddit where the post was submitted. - `score`: Net upvotes (discounting downvotes) received by the image post. This field is equal to `None` if the image post is a crosspost. - `created_utc`: Integer time epoch (in UTC) when the post was submitted to Reddit. - `permalink`: Partial URL of the Reddit post (https://reddit.com/<permalink>). - `crosspost_parents`: List of parent posts. This field is optional. ### Data Splits All the data is contained in the training set. The training set has nearly 12M (12,011,111) instances. From the paper: > We intend our dataset to be primarily used for pre-training with one or more specific downstream task(s) in mind. Hence, all instances in our dataset would be used for training while the validation split is derived from downstream task(s). If users require a validation split, we recommend sampling it such that it follows the same subreddit distribution as entire dataset. ## Dataset Creation ### Curation Rationale From the paper: > Large datasets of image-text pairs are widely used for pre-training generic representations that transfer to a variety of downstream vision and vision-and-language tasks. Existing public datasets of this kind were curated from search engine results (SBU Captions [1]) or HTML alt-text from arbitrary web pages (Conceptual Captions [2, 31]). They performed complex data filtering to deal with noisy web data. Due to aggressive filtering, their data collection is inefficient and diversity is artificially suppressed. We argue that the quality of data depends on its source, and the human intent behind its creation. In this work, we explore Reddit – a social media platform, for curating high quality data. We introduce RedCaps – a large dataset of 12M image-text pairs from Reddit.
While we expect the use-cases of RedCaps to be similar to existing datasets, we discuss how Reddit as a data source leads to fast and lightweight collection, better data quality, lets us easily steer the data distribution, and facilitates ethically responsible data curation. ### Source Data #### Initial Data Collection and Normalization From the paper: > **Data Collection Pipeline** Reddit’s uniform structure allows us to parallelize data collection as independent tasks – each task involves collecting posts submitted to a single subreddit in one year. Our collection pipeline has three steps: (1) subreddit selection, (2) image post filtering, and (3) caption cleaning. **Step 1**. Subreddit selection: We collect data from a manually curated set of subreddits. Subreddits have their own rules, community norms, and moderators so curating subreddits allows us to steer the dataset’s composition without annotating individual instances. We select subreddits with a high volume of image posts, where images tend to be photographs (rather than memes, drawings, screenshots, etc.) and post titles tend to describe image content (rather than making jokes, political commentary, etc.). We do not select any NSFW, banned, or quarantined subreddits. We want to minimize the number of people that appear in RedCaps, so we omit subreddits whose primary purpose is to share or comment on images of people (such as celebrity pics or user selfies). We choose subreddits focused on general photography (r/pics, r/itookapicture), animals (r/axolotls, r/birdsofprey, r/dachshund), plants (r/roses, r/succulents), objects (r/classiccars, r/trains, r/mechanicalkeyboards), food (r/steak, r/macarons), scenery (r/cityporn, r/desertporn), or activities (r/carpentry, r/kayaking). In total we collect data from 350 subreddits; the full list can be found in Appendix A. **Step 2**. Image post filtering: We use Pushshift [41] and Reddit [42, 43] APIs to download all image posts submitted to our selected subreddits from 2008–2020. Posts are collected at least six months after their creation to let upvotes stabilize. We only collect posts with images hosted on three domains: Reddit (i.redd.it), Imgur (i.imgur.com), and Flickr (staticflickr.com). Some image posts contain multiple images (gallery posts) – in this case we only collect the first image and associate it with the caption. We discard posts with < 2 upvotes to avoid unappealing content, and we discard posts marked NSFW (by their authors or subreddit moderators) to avoid pornographic or disturbing content. **Step 3**. Caption cleaning: We expect Reddit post titles to be less noisy than other large-scale sources of image captions such as alt-text [2, 31], so we apply minimal text cleaning. We lowercase captions and use ftfy [44] to remove character accents, emojis, and non-latin characters, following [29, 35, 36]. Then we apply simple pattern matching to discard all sub-strings enclosed in brackets ((.*), [.*]). These sub-strings usually give non-semantic information: original content tags [oc], image resolutions (800x600 px), camera specs (shot with iPhone), self-promotion [Instagram: @user], and other references (link in comments). Finally, like [31] we replace social media handles (words starting with ‘@’) with a [USR] token to protect user privacy and reduce redundancy. Due to such filtering, ≈12K (0.1%) captions in our dataset are empty strings. We do not discard them, as subreddit names alone provide meaningful supervision.
Unlike CC-3M or CC-12M that discard captions without nouns or that don’t overlap image tags, we do not discard any instances in this step. Through this pipeline, we collect 13.4M instances from 350 subreddits. Our collection pipeline is less resource-intensive than existing datasets – we do not require webpage crawlers, search engines, or large databases of indexed webpages. RedCaps is easily extensible in the future by selecting more subreddits and collecting posts from future years. Next, we perform additional filtering to mitigate user privacy risks and harmful stereotypes in RedCaps, resulting in a final size of 12M instances. #### Who are the source language producers? Reddit is the singular data source for RedCaps. ### Annotations #### Annotation process The dataset is built using a fully automatic data collection pipeline which doesn't require any human annotators. #### Who are the annotators? The annotation process doesn't require any human annotators. ### Personal and Sensitive Information From the paper: > **Does the dataset relate to people?** The dataset pertains to people in that people wrote the captions and posted images to Reddit that we curate in RedCaps. We made specific design choices while curating RedCaps to avoid large quantities of images containing people: (a) We collect data from manually curated subreddits in which most content primarily pertains to animals, objects, places, or activities. We exclude all subreddits whose primary purpose is to share and describe images of people (such as celebrity photos or user selfies). (b) We use an off-the-shelf face detector to find and remove images with potential presence of human faces. We manually checked 50K random images in RedCaps (Q16) and found 79 images with identifiable human faces – the entire dataset may have ≈19K (0.15%) images with identifiable people. Refer to Section 2.2 in the main paper. > **Is it possible to identify one or more natural persons, either directly or indirectly (i.e., in combination with other data) from the dataset?** Yes, all instances in RedCaps include Reddit usernames of their post authors. This could be used to look up the Reddit user profile, and some Reddit users may have identifying information in their profiles. Some images may contain human faces which could be identified by appearance. However, note that all this information is already public on Reddit, and searching it in RedCaps is no easier than searching directly on Reddit. > **Were the individuals in question notified about the data collection?** No. Reddit users are anonymous by default, and are not required to share their personal contact information (email, phone numbers, etc.). Hence, the only way to notify the authors of RedCaps image posts is by sending them private messages on Reddit. This is practically difficult to do manually, and a templated message programmatically sent to millions of users would be classified as spam and blocked by Reddit. > **Did the individuals in question consent to the collection and use of their data?** Users did not explicitly consent to the use of their data in our dataset. However, by uploading their data on Reddit, they consent that it would appear on the Reddit platform and will be accessible via the official Reddit API (which we use to collect RedCaps). > **If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses?** Users have full control over the presence of their data in our dataset.
If users wish to revoke their consent, they can delete the underlying Reddit post – it will be automatically removed from RedCaps since we distribute images as URLs. Moreover, we provide an opt-out request form on our dataset website for anybody to request removal of an individual instance if it is potentially harmful (e.g. NSFW, violates privacy, harmful stereotypes, etc.). ## Considerations for Using the Data ### Social Impact of Dataset From the paper: > **Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted?** No. ### Discussion of Biases From the paper: > **Harmful Stereotypes**: Another concern with Reddit data is that images or language may represent harmful stereotypes about gender, race, or other characteristics of people [48, 49, 51]. We select only non-NSFW subreddits with active moderation for collecting data. This stands in contrast to less curated uses of Reddit data, such as GPT-2 [35] whose training data includes at least 63K documents from banned or quarantined subreddits which may contain toxic language [53]. We attempt to further reduce harmful stereotypes in two ways: > * **NSFW images**: We use the InceptionV3 [54] model from [55] to filter images detected as porn or hentai with confidence ≥ 0.9. Similar to face filtering, we estimated precision of our filtering and estimated amount of missed detections, shown in Table 1. The model detects 87K images with low precision (∼1%) – most detections are non-NSFW images with pink and beige hues. > * **Potentially derogatory language**: We filter instances whose captions contain words or phrases from a common blocklist [56]. It is important to note that such coarse filtering might suppress language from marginalized groups reclaiming slurs [51]; however, as RedCaps is not intended to describe people, we believe this is a pragmatic tradeoff to avoid propagating harmful labels. > **Reddit demographics**: Reddit’s user demographics are not representative of the population at large. Compared to US adults, Reddit users skew male (69% vs 49%), young (58% 18-29 years old vs 22%), college educated (36% vs 28%), and politically liberal (41% vs 25%) [57]. Reddit users are predominantly white (63%) [57], and 49% of desktop traffic to Reddit comes from the United States [58]. All of the subreddits in RedCaps use English as their primary language. Taken together, these demographic biases likely also bias the types of objects and places that appear in images on Reddit, and the language used to describe these images. We do not offer explicit countermeasures to these biases, but users of RedCaps should keep in mind that size doesn’t guarantee diversity [51]. Subtler issues may also exist, such as imbalanced representation of demographic groups [59] or gender bias in object co-occurrence [60] or language [61]. These are hard to control in internet data, so we release RedCaps with explicit instructions on suitable use-cases; specifically requesting models not be trained to identify people, or make decisions that impact people. We document these instructions and other terms-of-use in a datasheet [45], provided in Appendix G. > **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?** The scale of RedCaps means that we are unable to verify the contents of all images and captions.
However, we have tried to minimize the possibility that RedCaps contains data that might be offensive, insulting, threatening, or might cause anxiety via the following mitigations: (a) We manually curate the set of subreddits from which to collect data; we only chose subreddits that are not marked NSFW and which generally contain non-offensive content. (b) Within our curated subreddits, we did not include any posts marked NSFW. (c) We removed all instances whose captions contained any of the 400 potentially offensive words or phrases. Refer to Section 2.2 in the main paper. (d) We remove all instances whose images were flagged NSFW by an off-the-shelf detector. We manually checked 50K random images in RedCaps and found one image containing nudity (exposed buttocks; no identifiable face). Refer to Section 2.2 in the main paper. > **Does the dataset identify any subpopulations (e.g., by age, gender)?** RedCaps does not explicitly identify any subpopulations. Since some images contain people and captions are free-form natural language written by Reddit users, it is possible that some captions may identify people appearing in individual images as part of a subpopulation. > **Were any ethical review processes conducted (e.g., by an institutional review board)?** We did not conduct a formal ethical review process via institutional review boards. However, as described in Section 2.2 of the main paper and Q16, we employed several filtering mechanisms to try and remove instances that could be problematic. ### Other Known Limitations From the paper: > **Are there any errors, sources of noise, or redundancies in the dataset?** RedCaps is noisy by design since image-text pairs on the internet are noisy and unstructured. Some instances may also have duplicate images and captions – Reddit users may have shared the same image post in multiple subreddits. Such redundancies constitute a very small fraction of the dataset, and should have almost no effect in training large-scale models. > **Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals non-public communications)?** No, the subreddits included in RedCaps do not cover topics that may be considered confidential. All posts were publicly shared on Reddit prior to inclusion in RedCaps. ## Additional Information ### Dataset Curators From the paper: > Four researchers at the University of Michigan (affiliated as of 2021) have created RedCaps: Karan Desai, Gaurav Kaul, Zubin Aysola, and Justin Johnson. ### Licensing Information The image metadata is licensed under the CC-BY 4.0 license. Additionally, uses of this dataset are subject to Reddit API terms (https://www.reddit.com/wiki/api-terms) and users must comply with the Reddit User Agreement, Content Policy, and Privacy Policy – all accessible at https://www.redditinc.com/policies. From the paper: > RedCaps should only be used for non-commercial research. RedCaps should not be used for any tasks that involve identifying features related to people (facial recognition, gender, age, ethnicity identification, etc.) or make decisions that impact people (mortgages, job applications, criminal sentences; or moderation decisions about user-uploaded data that could result in bans from a website).
Any commercial and for-profit uses of RedCaps are restricted – it should not be used to train models that will be deployed in production systems as part of a product offered by businesses or government agencies. ### Citation Information ```bibtex @misc{desai2021redcaps, title={RedCaps: web-curated image-text data created by the people, for the people}, author={Karan Desai and Gaurav Kaul and Zubin Aysola and Justin Johnson}, year={2021}, eprint={2111.11431}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
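As a rough illustration of the Step 3 caption cleaning described in the collection pipeline above, here is a minimal, unofficial sketch. Note that `ftfy.fix_text` mainly repairs mojibake rather than stripping accents and emojis, so this only approximates the paper's procedure; the regex patterns and the `[USR]` token follow the description above.

```python
import re

import ftfy  # pip install ftfy


def clean_caption(raw_caption: str) -> str:
    # Lowercase and normalize the text (approximation of the ftfy step).
    caption = ftfy.fix_text(raw_caption).lower()
    # Discard sub-strings enclosed in brackets, e.g. "[oc]" or "(shot on iPhone)".
    caption = re.sub(r"\(.*?\)|\[.*?\]", "", caption)
    # Replace social media handles (words starting with '@') with a [USR] token.
    caption = re.sub(r"@\w+", "[USR]", caption)
    # Collapse the whitespace left over from the removals.
    return re.sub(r"\s+", " ", caption).strip()


print(clean_caption("My succulent [OC] (shot on iPhone) @some_user"))
# -> "my succulent [USR]"
```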
gigant
null
null
null
false
34
false
gigant/oldbookillustrations_2
2022-08-03T17:35:37.000Z
null
false
e3d786d9d384232e7961c6303a9b5dba95ed8758
[]
[ "annotations_creators:expert-generated", "language:en", "language:fr", "language:de", "language_creators:expert-generated", "license:cc-by-nc-4.0", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "tags:lam", "tags:1800-1900", "task_categories:text-to-ima...
https://huggingface.co/datasets/gigant/oldbookillustrations_2/resolve/main/README.md
--- annotations_creators: - expert-generated language: - en - fr - de language_creators: - expert-generated license: - cc-by-nc-4.0 multilinguality: - multilingual pretty_name: Old Book Illustrations size_categories: - 1K<n<10K source_datasets: - original tags: - lam - 1800-1900 task_categories: - text-to-image - image-to-text - image-to-image task_ids: - image-captioning --- # Dataset Card for Old Book Illustrations ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Discussion of Biases](#discussion-of-biases) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](https://www.oldbookillustrations.com/)** ### Dataset Summary The Old Book Illustrations dataset contains 4172 illustrations scanned from old books. This collection was collected & curated by the team of the website [Old Book Illustrations](https://www.oldbookillustrations.com/). The webmaster of Old Book Illustrations kindly allowed us to scrape this information in order to create this dataset for the [BigLAM initiative](https://huggingface.co/biglam). ### Languages The captions and descriptions are mostly in English but can contain some sentences from other languages such as French or German. For instance, you can find this description, which contains a French sentence: >The caption reads in the original French: Vue de l’aqueduc de Salones qui conduisait l’eau à Spalatro. ## Dataset Structure Each row contains information gathered from the page of an illustration on the website [Old Book Illustrations](https://www.oldbookillustrations.com/). As of July 2022, there are 4172 illustrations in this dataset. ### Data Fields * `rawscan`: the image as originally scanned from the book, without further processing * `1600px`: the cleaned image, resized to a width of 1600 pixels (height can vary) * `info_url`: URL to the illustration page on oldbookillustrations.com * `info_src`: URL to an icon-sized version of the image * `info_alt`: short description of the image * `artist_name`: artist name * `artist_date`: birth date of the artist * `artist_countries`: list of the countries the artist is from * `book_title`: original title of the book the illustration is extracted from * `book_authors`: list of the authors of the book * `book_publishers`: list of the publishers of the book * `openlibrary-url`: URL to the openlibrary entry for the book * `tags`: list of keywords for this illustration on oldbookillustrations.com * `illustration_source_name`: list of the sources for this illustration * `illustration_source_url`: list of the URLs for these sources * `illustration_subject`: category of the subject represented in the illustration * `illustration_format`: category of the format of the illustration * `image_title`: title of the image * `image_caption`: caption of the image.
Seems to be the caption that appears next to the image in the book, translated to English if in another language * `image_description`: longer description of the image. If there is one, it also quotes the caption in the original language * `rawscan_url`: URL to the rawscan image on oldbookillustrations.com * `1600px_url`: URL to the cleaned image on oldbookillustrations.com ## Dataset Creation ### Curation Rationale This collection was collected & curated by the team of the website [Old Book Illustrations](https://www.oldbookillustrations.com/). This version contains all the data that was available on the website as of July 2022, but the website is being actively maintained, so if you want more old book illustrations, make sure to check [Old Book Illustrations](https://www.oldbookillustrations.com/). ### Source Data #### Initial Data Collection and Normalization Initial data is gathered from the website [Old Book Illustrations](https://www.oldbookillustrations.com/). The sources of the illustration scans are specified for each entry in the columns `illustration_source_name` and `illustration_source_url`. ### Personal and Sensitive Information The Old Book Illustrations' Terms and conditions reads: >OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate. ## Considerations for Using the Data ### Discussion of Biases The Old Book Illustrations' Terms and conditions reads: >OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate. ## Additional Information ### Dataset Curators The Old Book Illustrations collection is curated and maintained by the team of the [Old Book Illustrations website](https://www.oldbookillustrations.com/). ### Licensing Information The [Old Book Illustrations](https://www.oldbookillustrations.com/) website reads: >We don’t limit the use of the illustrations available on our site, but we accept no responsibility regarding any problem, legal or otherwise, which might result from such use. More specifically, we leave it up to users to make sure that their project complies with the copyright laws of their country of residence. Text content (descriptions, translations, etc.) is published under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. The Old Book Illustrations webmaster mentioned that most images are public domain in the US and Europe, but there can be some exceptions. One example is the illustrations from [*Early poems of William Morris*](https://www.oldbookillustrations.com/titles/early-poems-of-william-morris/), as the illustrator died in 1955, so her work is not public domain in Europe as of 2022; another is [*Under the hill*](https://www.oldbookillustrations.com/titles/under-the-hill/), which was published in the US in 1928 and therefore is not public domain there.
### Citation Information ```bibtex @misc{oldbookillustrations_2007, url={https://www.oldbookillustrations.com/}, journal={Old Book Illustrations}, year={2007}} ``` ### Contributions Thanks to [@gigant](https://huggingface.co/gigant) ([@giganttheo](https://github.com/giganttheo)) for adding this dataset.
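Given the fields listed above, a minimal usage sketch might look like the following; the `train` split name is an assumption, since the card does not state it.

```python
from datasets import load_dataset

# Load the Hub dataset (split name assumed to be "train").
ds = load_dataset("gigant/oldbookillustrations_2", split="train")

example = ds[0]
print(example["image_title"], "by", example["artist_name"])
print(example["info_url"])    # illustration page on oldbookillustrations.com
print(example["1600px_url"])  # cleaned scan, 1600 pixels wide
```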
okite97
null
null
null
false
60
false
okite97/news-data
2022-08-25T10:36:01.000Z
null
false
2c53f4b94137892d96c3bc4272028c3354c640a7
[]
[ "annotations_creators:other", "language:en", "language_creators:found", "license:afl-3.0", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "task_categories:text-classification", "task_ids:topic-classification", "task_ids:multi-class-classification" ]
https://huggingface.co/datasets/okite97/news-data/resolve/main/README.md
--- annotations_creators: - other language: - 'en' language_creators: - found license: - afl-3.0 multilinguality: - monolingual pretty_name: News Dataset size_categories: - 1K<n<10K source_datasets: - original tags: [] task_categories: - text-classification task_ids: - topic-classification - multi-class-classification --- # Dataset Card for news-data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Dataset Curators](#dataset-curators) ### Dataset Summary The News Dataset is an English-language dataset containing just over 4k unique news articles scraped from AriseTv, one of the most popular news television stations in Nigeria. ### Supported Tasks and Leaderboards It supports news article classification into different categories. ### Languages English ## Dataset Structure ### Data Instances ``` {'Title': 'Nigeria: APC Yet to Zone Party Positions Ahead of Convention' 'Excerpt': 'The leadership of the All Progressives Congress (APC), has denied reports that it had zoned some party positions ahead of' 'Category': 'politics' 'labels': 2} ``` ### Data Fields * Title: a string containing the title of a news article * Excerpt: a string containing a short extract from the body of the news * Category: a string that tells the category of an example (string label) * labels: integer telling the class of an example (label) ### Data Splits | Dataset Split | Number of instances in split | | ----------- | ----------- | | Train | 4,594 | | Test | 811 | ## Dataset Creation ### Source Data #### Initial Data Collection and Normalization The code for the dataset creation is at *https://github.com/chimaobi-okite/NLP-Projects-Competitions/blob/main/NewsCategorization/Data/NewsDataScraping.ipynb*. The examples were scraped from <https://www.arise.tv/> ### Annotations #### Annotation process The annotation is based on the news category on the [arisetv](https://www.arise.tv) website #### Who are the annotators? Journalists at arisetv ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop models that can classify news articles into categories. This task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any classifications produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated. ### Discussion of Biases This data is biased towards news events in Nigeria, but models built using it can also classify news from other parts of the world, with a slight degradation in performance. ### Dataset Curators The dataset is created by people at Arise but was scraped by [@github-chimaobi-okite](https://github.com/chimaobi-okite/)
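A minimal loading sketch, assuming the split names from the table above and the field names from the card:

```python
from datasets import load_dataset

# Load the news dataset from the Hub.
ds = load_dataset("okite97/news-data")

example = ds["train"][0]
print(example["Title"])                        # headline text
print(example["Category"], example["labels"])  # string label and its integer class
```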
Toygar
null
null
null
false
4
false
Toygar/turkish-offensive-language-detection
2022-10-21T12:14:32.000Z
null
false
456e0e150f62c719cc837db79e50d5448b0c0bd7
[]
[ "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language:tr", "license:cc-by-2.0", "multilinguality:monolingual", "size_categories:10K<n<100K", "task_categories:text-classification", "tags:offensive-language-classification" ]
https://huggingface.co/datasets/Toygar/turkish-offensive-language-detection/resolve/main/README.md
--- annotations_creators: - crowdsourced - expert-generated language_creators: - crowdsourced language: - tr license: - cc-by-2.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: [] task_categories: - text-classification task_ids: [] pretty_name: Turkish Offensive Language Detection Dataset tags: - offensive-language-classification --- # Dataset Summary This dataset is an enhanced version of existing offensive language studies. Existing studies are highly imbalanced, and solving this problem is too costly. To solve this, we proposed a contextual data mining method for dataset augmentation. Our method basically prevents us from having to retrieve random tweets and label them individually. We can directly access almost exactly the hate-related tweets and label them without any further human interaction, in order to solve the imbalanced label problem. In addition, existing studies *(can be found at the Reference section)* are merged to create an even more comprehensive and robust dataset for the Turkish offensive language detection task. The file train.csv contains 42,398 annotated tweets, test.csv contains 8,851, and valid.csv contains 1,756. # Dataset Structure A binary dataset with (0) Not Offensive and (1) Offensive tweets. ### Task and Labels Offensive language identification: - (0) Not Offensive - Tweet does not contain offense or profanity. - (1) Offensive - Tweet contains offensive language or a targeted (veiled or direct) offense ### Data Splits | | train | test | dev | |------:|:------|:-----|:-----| | 0 (Not Offensive) | 22,589 | 4,436 | 1,402 | | 1 (Offensive) | 19,809 | 4,415 | 354 | ### Citation Information ``` BibTeX will be provided after publication. UBMK 2022 Paper: "Linguistic-based Data Augmentation Approach for Offensive Language Detection" ``` ### Paper codes https://github.com/toygarr/lingda # References We merged open-source offensive language dataset studies in Turkish to increase contextuality with existing data even more, before our method is applied. - https://huggingface.co/datasets/offenseval2020_tr - https://github.com/imayda/turkish-hate-speech-dataset-2 - https://www.kaggle.com/datasets/kbulutozler/5k-turkish-tweets-with-incivil-content
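Since the card ships plain train/test/valid CSV files, a loading sketch with the generic `csv` builder might look like this; the resolve-URL pattern and column layout are assumptions.

```python
from datasets import load_dataset

# Point the generic CSV loader at the repository's raw files.
base = "https://huggingface.co/datasets/Toygar/turkish-offensive-language-detection/resolve/main"
ds = load_dataset(
    "csv",
    data_files={
        "train": f"{base}/train.csv",
        "test": f"{base}/test.csv",
        "validation": f"{base}/valid.csv",
    },
)
print(ds["train"][0])  # expect the tweet text plus a 0/1 offensiveness label
```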
biglam
null
null
null
false
1
false
biglam/archives_parlementaires_revolution_francaise
2022-09-05T11:53:04.000Z
null
false
734a6f81948727f4a41a98aaac68a8dc7cd86cd8
[]
[ "license:cc-by-4.0", "language:fr" ]
https://huggingface.co/datasets/biglam/archives_parlementaires_revolution_francaise/resolve/main/README.md
--- license: cc-by-4.0 language: fr ---
DFKI-SLT
null
@inproceedings{lauscher2018b, title = {An argument-annotated corpus of scientific publications}, booktitle = {Proceedings of the 5th Workshop on Mining Argumentation}, publisher = {Association for Computational Linguistics}, author = {Lauscher, Anne and Glava\v{s}, Goran and Ponzetto, Simone Paolo}, address = {Brussels, Belgium}, year = {2018}, pages = {40–46} }
The SciArg dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing fine-grained argumentative components and relations. It is the first argument-annotated corpus of scientific publications (in English), which allows for joint analyses of argumentation and other rhetorical dimensions of scientific writing.
false
1
false
DFKI-SLT/sciarg
2022-07-28T14:04:31.000Z
null
false
15ba2479192e7cf974e4e295a7d721a650c06f03
[]
[ "annotations_creators:expert-generated", "language:en", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:dr inventor corpus", "tags:argument mining", "tags:scientific text", "tags:relation extraction", "tags:argumentative discourse u...
https://huggingface.co/datasets/DFKI-SLT/sciarg/resolve/main/README.md
--- annotations_creators: - expert-generated language: - en language_creators: - expert-generated license: [] multilinguality: - monolingual pretty_name: SciArg size_categories: - 1K<n<10K source_datasets: - dr inventor corpus tags: - argument mining - scientific text - relation extraction - argumentative discourse unit recognition task_categories: - token-classification task_ids: [] --- # Dataset Card for "sciarg" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/anlausch/ArguminSci](https://github.com/anlausch/ArguminSci) - **Repository:** [https://github.com/anlausch/ArguminSci](https://github.com/anlausch/ArguminSci) - **Paper:** [An argument-annotated corpus of scientific publications](https://aclanthology.org/W18-5206.pdf) - **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Summary The SciArg dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing fine-grained argumentative components and relations. It is the first argument-annotated corpus of scientific publications (in English), which allows for joint analyses of argumentation and other rhetorical dimensions of scientific writing. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages The language in the dataset is English. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `document_id`: the base file name, e.g. "A28" - `text`: the parsed text of the scientific publication in the XML format - `text_bound_annotations`: span annotations that mark argumentative discourse units (ADUs). Each entry has the following fields: `offsets`, `text`, `type`, and `id`. - `relations`: binary relation annotations that mark the argumentative relations that hold between a head and a tail ADU. Each entry has the following fields: `id`, `head`, `tail`, and `type` where `head` and `tail` each have the fields: `ref_id` and `role`. ### Data Splits The dataset consists of a single `train` split that has 40 documents. 
## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{lauscher2018b, title = {An argument-annotated corpus of scientific publications}, booktitle = {Proceedings of the 5th Workshop on Mining Argumentation}, publisher = {Association for Computational Linguistics}, author = {Lauscher, Anne and Glava\v{s}, Goran and Ponzetto, Simone Paolo}, address = {Brussels, Belgium}, year = {2018}, pages = {40–46} } ``` ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
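A sketch of iterating over the annotation fields described above; the exact nesting of `offsets` and `text` may differ from this assumption, so inspect `ds.features` first.

```python
from datasets import load_dataset

# Load the single train split (40 documents).
ds = load_dataset("DFKI-SLT/sciarg", split="train")

doc = ds[0]
print(doc["document_id"])
# Argumentative discourse units (span annotations).
for adu in doc["text_bound_annotations"][:3]:
    print(adu["type"], adu["offsets"], adu["text"])
# Argumentative relations between a head and a tail ADU.
for rel in doc["relations"][:3]:
    print(rel["type"], rel["head"]["ref_id"], "->", rel["tail"]["ref_id"])
```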
vincentclaes
null
null
null
false
1
false
vincentclaes/emoji-predictor
2022-09-20T14:38:38.000Z
null
false
0af1841a59d37a07091ea69bce12947558fa4d55
[]
[]
https://huggingface.co/datasets/vincentclaes/emoji-predictor/resolve/main/README.md
# Emoji Predictor The dataset consists of raw tweets as text and an emoji as the label. Original dataset: https://huggingface.co/datasets/AlekseyDorkin/extended_tweet_emojis - Fine-tuned model: https://huggingface.co/vincentclaes/emoji-predictor - Try the model here: https://huggingface.co/spaces/vincentclaes/emoji-predictor
ChristophSchuhmann
null
null
null
false
1
false
ChristophSchuhmann/LAION-5B-EN-Aesthetics-Subset_above_5.0
2022-07-28T16:08:42.000Z
null
false
5794e0a3cecf4fd9a213b8077255cc792dbf4c17
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/ChristophSchuhmann/LAION-5B-EN-Aesthetics-Subset_above_5.0/resolve/main/README.md
--- license: apache-2.0 ---
ChristophSchuhmann
null
null
null
false
1
false
ChristophSchuhmann/LAION-5B-EN-Aesthetics-Subset_above_6
2022-07-28T16:09:21.000Z
null
false
ab6c512f3f9f5573805b1246a2d9e79a9e9bf070
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/ChristophSchuhmann/LAION-5B-EN-Aesthetics-Subset_above_6/resolve/main/README.md
--- license: apache-2.0 ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-ce219d86-12025605
2022-07-28T21:06:06.000Z
null
false
e81ff8291dc22db23b272e9a5c393d322e530891
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:Blaise-g/SumPubmed" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-ce219d86-12025605/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - Blaise-g/SumPubmed eval_info: task: summarization model: Blaise-g/led_finetuned_sumpubmed metrics: ['bertscore'] dataset_name: Blaise-g/SumPubmed dataset_config: Blaise-g--SumPubmed dataset_split: test col_mapping: text: text target: abstract --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Blaise-g/led_finetuned_sumpubmed * Dataset: Blaise-g/SumPubmed * Config: Blaise-g--SumPubmed * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-ca1f103f-12035606
2022-07-28T20:34:23.000Z
null
false
49bca9d76447b7dbe452b2a8a4426155c28df4ba
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-ca1f103f-12035606/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: nbroad/longt5-base-global-mediasum metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: test col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: nbroad/longt5-base-global-mediasum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-20a28003-12045607
2022-07-28T20:27:48.000Z
null
false
7b01ec427ea3d0e879e4e26ca3cdfa5ce6526ca9
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-20a28003-12045607/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: nbroad/longt5-base-global-mediasum metrics: [] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: nbroad/longt5-base-global-mediasum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
alkzar90
null
null
null
false
2
false
alkzar90/croupier-mtg-dataset
2022-08-02T01:41:48.000Z
null
false
399ed23149edf1be91a18fd8e60e3fea25262dfc
[]
[ "annotations_creators:found", "license:apache-2.0", "size_categories:1K<n<10K", "source_datasets:original", "tags:mgt", "tags:magic-card-game", "tags:creature-dataset", "task_categories:image-classification", "task_ids:multi-class-image-classification" ]
https://huggingface.co/datasets/alkzar90/croupier-mtg-dataset/resolve/main/README.md
--- annotations_creators: - found language: [] language_creators: [] license: - apache-2.0 multilinguality: [] pretty_name: 'Croupier: a Magic the Gathering creatures dataset' size_categories: - 1K<n<10K source_datasets: - original tags: - mgt - magic-card-game - creature-dataset task_categories: - image-classification task_ids: - multi-class-image-classification --- ## Dataset Description - **Homepage:** the [Gatherer](https://gatherer.wizards.com/Pages/) - **Repository:** https://github.com/alcazar90/croupier-mtg-dataset ### Dataset Summary A card-image dataset of 4 types of creatures from the Magic the Gathering card game: elf, goblin, knight, and zombie. ## Dataset Creation All card information from the Magic the Gathering card game is publicly available from the [Gatherer](https://gatherer.wizards.com/Pages/) website, the official Magic Card Database. The dataset is just a subset selection of 4 kinds of creatures from the game.
OATML-Markslab
null
null
null
false
19,343
false
OATML-Markslab/ProteinGym
2022-07-29T00:12:02.000Z
null
false
4075aa679683f3071d527283819637f3446ca488
[]
[ "arxiv:2205.13760" ]
https://huggingface.co/datasets/OATML-Markslab/ProteinGym/resolve/main/README.md
## ProteinGym benchmarks overview ProteinGym is an extensive set of Deep Mutational Scanning (DMS) assays curated to enable thorough comparisons of various mutation effect predictors in different regimes. It comprises two benchmarks: 1) a substitution benchmark which consists of the experimental characterisation of ∼1.5M missense variants across 87 DMS assays 2) an indel benchmark that includes ∼300k mutants across 7 DMS assays. Each processed file in each benchmark corresponds to a single DMS assay, and contains the following three variables: 1) mutant (str): - for the substitution benchmark, it describes the set of substitutions to apply on the reference sequence to obtain the mutated sequence (e.g., A1P:D2N implies the amino acid 'A' at position 1 should be replaced by 'P', and 'D' at position 2 should be replaced by 'N') - for the indel benchmark, it corresponds to the full mutated sequence 2) DMS_score (float): corresponds to the experimental measurement in the DMS assay. Across all assays, the higher the DMS_score value, the higher the fitness of the mutated protein 3) DMS_score_bin (int): indicates whether the DMS_score is above the fitness cutoff (1 is fit, 0 is not fit) Additionally, we provide two reference files (ProteinGym_reference_file_substitutions.csv and ProteinGym_reference_file_indels.csv) that give further details on each assay and contain in particular: - The UniProt_ID of the corresponding protein, along with taxon and MSA depth category - The target sequence (target_seq) used in the assay - Details on how the DMS_score was created from the raw files and how it was binarized ## Reference If you use ProteinGym in your work, please cite the following paper: ``` Notin, P., Dias, M., Frazer, J., Marchena-Hurtado, J., Gomez, A., Marks, D.S., Gal, Y. (2022). Tranception: Protein Fitness Prediction with Autoregressive Transformers and Inference-time Retrieval. ICML. ``` ## Links - Pre-print: https://arxiv.org/abs/2205.13760 - Code: https://github.com/OATML-Markslab/Tranception
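The substitution format described above is easy to apply mechanically. Below is a small, hypothetical helper (not part of the official codebase), assuming 1-indexed positions into `target_seq`:

```python
def apply_substitutions(target_seq: str, mutant: str) -> str:
    """Apply a ProteinGym-style substitution string such as "A1P:D2N"."""
    seq = list(target_seq)
    for sub in mutant.split(":"):
        # Each substitution reads: wild-type residue, 1-indexed position, mutant residue.
        wt, pos, mt = sub[0], int(sub[1:-1]), sub[-1]
        assert seq[pos - 1] == wt, f"reference mismatch at position {pos}"
        seq[pos - 1] = mt
    return "".join(seq)


print(apply_substitutions("ADGK", "A1P:D2N"))  # -> "PNGK"
```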
ICML2022
null
null
null
false
1
false
ICML2022/ProteinGym
2022-07-29T00:19:31.000Z
null
false
e936ae69e3c70ff651d47889a389de6f596863b2
[]
[ "arxiv:2205.13760" ]
https://huggingface.co/datasets/ICML2022/ProteinGym/resolve/main/README.md
## ProteinGym benchmarks overview ProteinGym is an extensive set of Deep Mutational Scanning (DMS) assays curated to enable thorough comparisons of various mutation effect predictors in different regimes. It comprises two benchmarks: 1) a substitution benchmark which consists of the experimental characterisation of ∼1.5M missense variants across 87 DMS assays 2) an indel benchmark that includes ∼300k mutants across 7 DMS assays. Each processed file in each benchmark corresponds to a single DMS assay, and contains the following three variables: 1) mutant (str): - for the substitution benchmark, it describes the set of substitutions to apply on the reference sequence to obtain the mutated sequence (e.g., A1P:D2N implies the amino acid 'A' at position 1 should be replaced by 'P', and 'D' at position 2 should be replaced by 'N') - for the indel benchmark, it corresponds to the full mutated sequence 2) DMS_score (float): corresponds to the experimental measurement in the DMS assay. Across all assays, the higher the DMS_score value, the higher the fitness of the mutated protein 3) DMS_score_bin (int): indicates whether the DMS_score is above the fitness cutoff (1 is fit, 0 is not fit) Additionally, we provide two reference files (ProteinGym_reference_file_substitutions.csv and ProteinGym_reference_file_indels.csv) that give further details on each assay and contain in particular: - The UniProt_ID of the corresponding protein, along with taxon and MSA depth category - The target sequence (target_seq) used in the assay - Details on how the DMS_score was created from the raw files and how it was binarized ## Reference If you use ProteinGym in your work, please cite the following paper: ``` Notin, P., Dias, M., Frazer, J., Marchena-Hurtado, J., Gomez, A., Marks, D.S., Gal, Y. (2022). Tranception: Protein Fitness Prediction with Autoregressive Transformers and Inference-time Retrieval. ICML. ``` ## Links - Pre-print: https://arxiv.org/abs/2205.13760 - Code: https://github.com/OATML-Markslab/Tranception
biglam
null
@dataset{clerice_thibault_2022_6827706, author = {Clérice, Thibault}, title = {YALTAi: Tabular Dataset}, month = jul, year = 2022, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.6827706}, url = {https://doi.org/10.5281/zenodo.6827706} }
Yalt AI Tabular Dataset
false
3
false
biglam/yalta_ai_tabular_dataset
2022-10-23T21:56:38.000Z
null
false
65d7baf884b0ca8c02ad1f678b83904ccc1d2062
[]
[ "arxiv:2207.11230", "annotations_creators:expert-generated", "language_creators:expert-generated", "license:cc-by-4.0", "size_categories:n<1K", "tags:manuscripts", "tags:LAM", "task_categories:object-detection" ]
https://huggingface.co/datasets/biglam/yalta_ai_tabular_dataset/resolve/main/README.md
--- annotations_creators: - expert-generated language: [] language_creators: - expert-generated license: - cc-by-4.0 multilinguality: [] pretty_name: YALTAi Tabular Dataset size_categories: - n<1K source_datasets: [] tags: - manuscripts - LAM task_categories: - object-detection task_ids: [] --- # YALTAi Tabular Dataset ## Table of Contents - [YALTAi Tabular Dataset](#YALTAi-Tabular-Dataset) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://doi.org/10.5281/zenodo.6827706](https://doi.org/10.5281/zenodo.6827706) - **Paper:** [https://arxiv.org/abs/2207.11230](https://arxiv.org/abs/2207.11230) ### Dataset Summary This dataset contains a subset of data used in the paper [You Actually Look Twice At it (YALTAi): using an object detection approach instead of region segmentation within the Kraken engine](https://arxiv.org/abs/2207.11230). This paper proposes treating page layout recognition on historical documents as an object detection task (compared to the usual pixel segmentation approach). This dataset covers pages with tabular information with the following objects: "Header", "Col", "Marginal", "text". ### Supported Tasks and Leaderboards - `object-detection`: This dataset can be used to train a model for object-detection on historic document images. ## Dataset Structure This dataset has two configurations. These configurations both cover the same data and annotations but provide these annotations in different forms to make it easier to integrate the data with existing processing pipelines. - The first configuration, `YOLO`, uses the data's original format. - The second configuration converts the YOLO format into a format which is closer to the `COCO` annotation format. This is done to make it easier to work with the `feature_extractor`s from the `Transformers` models for object detection, which expect data to be in a COCO style format.
### Data Instances An example instance from the COCO config: ```python {'height': 2944, 'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=2064x2944 at 0x7FA413CDA210>, 'image_id': 0, 'objects': [{'area': 435956, 'bbox': [0.0, 244.0, 1493.0, 292.0], 'category_id': 0, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 88234, 'bbox': [305.0, 127.0, 562.0, 157.0], 'category_id': 2, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 5244, 'bbox': [1416.0, 196.0, 92.0, 57.0], 'category_id': 2, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 5720, 'bbox': [1681.0, 182.0, 88.0, 65.0], 'category_id': 2, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 374085, 'bbox': [0.0, 540.0, 163.0, 2295.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 577599, 'bbox': [104.0, 537.0, 253.0, 2283.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 598670, 'bbox': [304.0, 533.0, 262.0, 2285.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 56, 'bbox': [284.0, 539.0, 8.0, 7.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 1868412, 'bbox': [498.0, 513.0, 812.0, 2301.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 307800, 'bbox': [1250.0, 512.0, 135.0, 2280.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 494109, 'bbox': [1330.0, 503.0, 217.0, 2277.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 52, 'bbox': [1734.0, 1013.0, 4.0, 13.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 90666, 'bbox': [0.0, 1151.0, 54.0, 1679.0], 'category_id': 1, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}], 'width': 2064} ``` An example instance from the YOLO config: ```python {'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=2064x2944 at 0x7FAA140F2450>, 'objects': {'bbox': [[747, 390, 1493, 292], [586, 206, 562, 157], [1463, 225, 92, 57], [1725, 215, 88, 65], [80, 1688, 163, 2295], [231, 1678, 253, 2283], [435, 1675, 262, 2285], [288, 543, 8, 7], [905, 1663, 812, 2301], [1318, 1653, 135, 2280], [1439, 1642, 217, 2277], [1737, 1019, 4, 13], [26, 1991, 54, 1679]], 'label': [0, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1]}} ``` ### Data Fields The fields for the YOLO config: - `image`: the image - `objects`: the annotations, which consist of: - `bbox`: a list of bounding boxes for the image, stored as `[x_center, y_center, width, height]` in absolute pixels (judging by a comparison of the two example instances above; a conversion sketch is given at the end of this card) - `label`: a list of labels, one per bounding box The fields for the COCO config: - `height`: height of the image - `width`: width of the image - `image`: the image - `image_id`: id for the image - `objects`: annotations in COCO format, consisting of a list of dictionaries with the following keys: - `bbox`: the bounding box for the object, stored as `[x_min, y_min, width, height]` - `category_id`: the label for the annotated object - `image_id`: id for the image the object belongs to - `iscrowd`: COCO `iscrowd` flag - `segmentation`: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts) ### Data Splits The dataset contains a train, validation and test split with the following numbers per split: | | train | validation | test | |----------|-------|------------|------| | examples | 196 | 22 | 135 | ## Dataset Creation > [this] dataset was produced using a single source, the Lectaurep Repertoires 
dataset [Rostaing et al., 2021], which served as a basis for only the training and development split. The testset is composed of original data, from various documents, from the 17th century up to the early 20th with a single soldier war report. The test set is voluntarily very different and out of domain with column borders that are not drawn nor printed in certain cases, layout in some kind of masonry layout. (p. 8) ### Curation Rationale This dataset was created to produce a simplified version of the [Lectaurep Repertoires dataset](https://github.com/HTR-United/lectaurep-repertoires), which was found to contain: > around 16 different ways to describe columns, from Col1 to Col7, the case-different col1-col7 and finally ColPair and ColOdd, which we all reduced to Col (p. 8) ### Source Data #### Initial Data Collection and Normalization The LECTAUREP (LECTure Automatique de REPertoires) project, which began in 2018, is a joint initiative of the Minutier central des notaires de Paris of the National Archives, the [ALMAnaCH (Automatic Language Modeling and Analysis & Computational Humanities)](https://www.inria.fr/en/almanach) team at Inria, and the EPHE (Ecole Pratique des Hautes Etudes), in partnership with the Ministry of Culture. > The lectaurep-bronod corpus brings together 100 pages from the repertoire of Maître Louis Bronod (1719-1765), notary in Paris from December 13, 1719 to July 23, 1765. The pages concerned were written during the years 1742 to 1745. #### Who are the source language producers? [More information needed] ### Annotations | | Train | Dev | Test | Total | Average area | Median area | |----------|-------|-----|------|-------|--------------|-------------| | Col | 724 | 105 | 829 | 1658 | 9.32 | 6.33 | | Header | 103 | 15 | 42 | 160 | 6.78 | 7.10 | | Marginal | 60 | 8 | 0 | 68 | 0.70 | 0.71 | | Text | 13 | 5 | 0 | 18 | 0.01 | 0.00 | #### Annotation process [More information needed] #### Who are the annotators? [More information needed] ### Personal and Sensitive Information This data does not contain information relating to living individuals. ## Considerations for Using the Data ### Social Impact of Dataset A growing number of datasets are related to page layout for historical documents. This dataset offers a different approach to annotating these datasets (focusing on object detection rather than pixel-level annotations). Improving document layout recognition can have a positive impact on downstream tasks, in particular Optical Character Recognition. ### Discussion of Biases Historical documents contain a wide variety of page layouts. This means that the ability of models trained on this dataset to transfer to documents with very different layouts is not guaranteed. ### Other Known Limitations [More information needed] ## Additional Information ### Dataset Curators [More information needed] ### Licensing Information [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) ### Citation Information ``` @dataset{clerice_thibault_2022_6827706, author = {Clérice, Thibault}, title = {YALTAi: Tabular Dataset}, month = jul, year = 2022, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.6827706}, url = {https://doi.org/10.5281/zenodo.6827706} } ``` [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6827706.svg)](https://doi.org/10.5281/zenodo.6827706) ### Contributions Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
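### A note on box formats Comparing the example instances above, the `YOLO` configuration appears to store absolute-pixel boxes as `[x_center, y_center, width, height]`, while the `COCO` configuration uses `[x_min, y_min, width, height]`. A minimal conversion sketch (the format is inferred from the examples, not documented upstream; verify against your own loaded data):

```python
def yolo_to_coco(bbox):
    """Convert an absolute-pixel [x_center, y_center, w, h] box
    into a COCO-style [x_min, y_min, w, h] box."""
    x_center, y_center, width, height = bbox
    return [x_center - width / 2, y_center - height / 2, width, height]

# e.g. yolo_to_coco([586, 206, 562, 157]) -> [305.0, 127.5, 562, 157],
# matching the corresponding COCO instance above up to rounding
```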
crazyofapple
null
null
null
false
1
false
crazyofapple/CME-Chinese
2022-07-29T07:39:55.000Z
null
false
3ab203bc05d2e413b5d7ac87c5329a18bb0539a9
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/crazyofapple/CME-Chinese/resolve/main/README.md
--- license: apache-2.0 ---
PaddlePaddle
null
null
DuConv is a Chinese conversation dataset designed to evaluate dialogue models.
false
7
false
PaddlePaddle/duconv
2022-07-29T11:44:00.000Z
null
false
2080deae0c89256bb023ad321b453dec5971b61a
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/PaddlePaddle/duconv/resolve/main/README.md
--- license: apache-2.0 ---
awacke1
null
null
null
false
1
false
awacke1/DNA-Aaron-C-Wacker-Open-Source-Genome-Project
2022-07-29T16:50:05.000Z
null
false
a50258122840d6603aa487849c3bbc60514998fd
[]
[ "license:mit" ]
https://huggingface.co/datasets/awacke1/DNA-Aaron-C-Wacker-Open-Source-Genome-Project/resolve/main/README.md
--- license: mit ---
pinecone
null
null
null
false
1
false
pinecone/dl-doc-search
2022-07-29T18:39:12.000Z
null
false
17a4a3f0eec731d9559d68707b3ce65bffc4bcf5
[]
[]
https://huggingface.co/datasets/pinecone/dl-doc-search/resolve/main/README.md
--- language: - en language_creators: - found multilinguality: - monolingual pretty_name: hello size_categories: - '100K<n<1M' ---
LiptaphX
null
null
null
false
1
false
LiptaphX/deneme
2022-07-29T21:33:01.000Z
null
false
56834ba511d9eea394d1441de14c7da21bb23113
[]
[ "license:afl-3.0" ]
https://huggingface.co/datasets/LiptaphX/deneme/resolve/main/README.md
--- license: afl-3.0 ---
carbon225
null
null
null
false
1
false
carbon225/lichess-elite
2022-07-31T19:41:07.000Z
null
false
467c261e5016e4eede158b8f6cea7e0cbdb3f1ab
[]
[ "license:cc0-1.0" ]
https://huggingface.co/datasets/carbon225/lichess-elite/resolve/main/README.md
--- license: cc0-1.0 ---
thocheat
null
null
null
false
1
false
thocheat/vlsp
2022-08-01T08:39:05.000Z
null
false
285490f2389cc194eb763409721ef3cf6d8fb075
[]
[ "license:other" ]
https://huggingface.co/datasets/thocheat/vlsp/resolve/main/README.md
--- license: other ---
Yehor
null
null
null
false
1
false
Yehor/voa-uk-transcriptions
2022-09-10T10:07:34.000Z
null
false
ec4e46722c866c0e0bf1ad561b7bb8a4a5068995
[]
[ "language:uk", "license:cc-by-4.0" ]
https://huggingface.co/datasets/Yehor/voa-uk-transcriptions/resolve/main/README.md
--- language: - uk license: cc-by-4.0 --- This repository contains transcriptions, along with other metadata, for the VOA Ukrainian dataset (~398 hours). Usage: ```python from datasets import load_dataset ds = load_dataset('Yehor/voa-uk-transcriptions', split='train') for row in ds: print(row['text']) ```
JetsonEarth
null
null
null
false
1
false
JetsonEarth/jet_funsd
2022-07-30T14:49:35.000Z
null
false
1c0214d65571139d86b310eadb2e6615be0df374
[]
[]
https://huggingface.co/datasets/JetsonEarth/jet_funsd/resolve/main/README.md
FUNSD dataset
JetsonEarth
null
null
null
false
1
false
JetsonEarth/jetson_funsd
2022-07-30T15:28:55.000Z
null
false
50b19f4267f1528ffa926fe0112935d5bdf17597
[]
[]
https://huggingface.co/datasets/JetsonEarth/jetson_funsd/resolve/main/README.md
FUNSD
jordiae
null
@inproceedings{10.1145/3520312.3534867, author = {Armengol-Estap\'{e}, Jordi and Woodruff, Jackson and Brauckmann, Alexander and Magalh\~{a}es, Jos\'{e} Wesley de Souza and O'Boyle, Michael F. P.}, title = {ExeBench: An ML-Scale Dataset of Executable C Functions}, year = {2022}, isbn = {9781450392730}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3520312.3534867}, doi = {10.1145/3520312.3534867}, abstract = {Machine-learning promises to transform compilation and software engineering, yet is frequently limited by the scope of available datasets. In particular, there is a lack of runnable, real-world datasets required for a range of tasks ranging from neural program synthesis to machine learning-guided program optimization. We introduce a new dataset, ExeBench, which attempts to address this. It tackles two key issues with real-world code: references to external types and functions and scalable generation of IO examples. ExeBench is the first publicly available dataset that pairs real-world C code taken from GitHub with IO examples that allow these programs to be run. We develop a toolchain that scrapes GitHub, analyzes the code, and generates runnable snippets of code. We analyze our benchmark suite using several metrics, and show it is representative of real-world code. ExeBench contains 4.5M compilable and 700k executable C functions. This scale of executable, real functions will enable the next generation of machine learning-based programming tasks.}, booktitle = {Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming}, pages = {50–59}, numpages = {10}, keywords = {Code Dataset, Program Synthesis, Mining Software Repositories, C, Machine Learning for Code, Compilers}, location = {San Diego, CA, USA}, series = {MAPS 2022} }
An ML-scale dataset of executable C functions
false
228
false
jordiae/exebench
2022-10-10T11:35:30.000Z
null
false
b8a9882475b1a71dc05f4d4bf292bc7a60d3f175
[]
[]
https://huggingface.co/datasets/jordiae/exebench/resolve/main/README.md
# ExeBench: an ML-scale dataset of executable C functions ExeBench is a dataset of millions of C functions paired with dependencies and metadata such that at least a subset of them can be executed with IO pairs. It is mainly intended for machine learning applications, but it is application-agnostic enough to have other uses. Please read the paper for more information: https://dl.acm.org/doi/abs/10.1145/3520312.3534867. Please see `examples/` in https://github.com/jordiae/exebench for examples. ## Usage ### Option 1: Using the helpers in this repo ``` git clone https://github.com/jordiae/exebench.git cd exebench/ python -m venv venv source venv/bin/activate pip install -r requirements_examples.txt PYTHONPATH="${PYTHONPATH}:$(pwd)" python examples/basic.py ``` ### Option 2: Directly using the Hugging Face Datasets library ``` !pip install datasets zstandard from datasets import load_dataset # Load a dataset split; in this case, the synthetic test split dataset = load_dataset('jordiae/exebench', split='test_synth') for e in dataset: ... ``` ### Option 3: Directly download the dataset Take a look at the files at: https://huggingface.co/datasets/jordiae/exebench/tree/main The dataset consists of directories compressed with TAR. Inside each TAR, there is a series of jsonlines files compressed with zstandard (a reading sketch is given at the end of this card). ## Statistics and versions This release corresponds to ExeBench v1.01, a version with some improvements over the original one presented in the paper. The statistics and studies presented in the paper remain consistent with this new version. The final splits of the new version consist of the following functions: ``` train_not_compilable: 2.357M train_synth_compilable: 2.308373M train_real_compilable: 0.675074M train_synth_simple_io: 0.550116M train_real_simple_io: 0.043769M train_synth_rich_io: 0.097250M valid_synth: 5k valid_real: 2.133k test_synth: 5k test_real: 2.134k ``` The original dataset (v1.00) with the exact same data studied in the paper can be accessed on request at: https://huggingface.co/datasets/jordiae/exebench_legacy (please reach out for access) ## License All C functions keep the original license as per their original Github repository (available in the metadata). All ExeBench contributions (I/O examples, boilerplate to run functions, etc.) are released with an MIT license. ## Citation ``` @inproceedings{10.1145/3520312.3534867, author = {Armengol-Estap\'{e}, Jordi and Woodruff, Jackson and Brauckmann, Alexander and Magalh\~{a}es, Jos\'{e} Wesley de Souza and O'Boyle, Michael F. P.}, title = {ExeBench: An ML-Scale Dataset of Executable C Functions}, year = {2022}, isbn = {9781450392730}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3520312.3534867}, doi = {10.1145/3520312.3534867}, abstract = {Machine-learning promises to transform compilation and software engineering, yet is frequently limited by the scope of available datasets. In particular, there is a lack of runnable, real-world datasets required for a range of tasks ranging from neural program synthesis to machine learning-guided program optimization. We introduce a new dataset, ExeBench, which attempts to address this. It tackles two key issues with real-world code: references to external types and functions and scalable generation of IO examples. ExeBench is the first publicly available dataset that pairs real-world C code taken from GitHub with IO examples that allow these programs to be run. 
We develop a toolchain that scrapes GitHub, analyzes the code, and generates runnable snippets of code. We analyze our benchmark suite using several metrics, and show it is representative of real-world code. ExeBench contains 4.5M compilable and 700k executable C functions. This scale of executable, real functions will enable the next generation of machine learning-based programming tasks.}, booktitle = {Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming}, pages = {50–59}, numpages = {10}, keywords = {Code Dataset, Program Synthesis, Mining Software Repositories, C, Machine Learning for Code, Compilers}, location = {San Diego, CA, USA}, series = {MAPS 2022} } ``` ## Credits We thank the AnghaBench authors for their type-inference-based generation of synthetic dependencies for C functions. This software, Psyche-C, can be found at: https://github.com/ltcmelo/psychec ## Contact ``` jordi.armengol.estape at ed.ac.uk ```
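For Option 3 above, a minimal sketch for streaming one of the zstandard-compressed jsonlines files after unpacking a TAR archive (the file name below is hypothetical, and the record fields depend on the split):

```python
import io
import json

import zstandard  # pip install zstandard

# Hypothetical name; use an actual file extracted from one of the TAR archives
path = "example.jsonl.zst"

with open(path, "rb") as fh:
    reader = zstandard.ZstdDecompressor().stream_reader(fh)
    for line in io.TextIOWrapper(reader, encoding="utf-8"):
        record = json.loads(line)  # one C function plus its metadata
        # inspect record.keys() to see which fields this split provides
```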
bigscience
null
@misc{muennighoff2022crosslingual, title={Crosslingual Generalization through Multitask Finetuning}, author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel}, year={2022}, eprint={2211.01786}, archivePrefix={arXiv}, primaryClass={cs.CL} }
xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
false
null
false
bigscience/xP3all
2022-11-04T01:56:31.000Z
null
false
867224acc89ef9d5dafa13194fb26b21adc4f4b1
[]
[ "arxiv:2211.01786", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "language:ak", "language:ar", "language:as", "language:bm", "language:bn", "language:ca", "language:code", "language:en", "language:es", "language:eu", "language:fon", "language:fr", "lang...
https://huggingface.co/datasets/bigscience/xP3all/resolve/main/README.md
--- annotations_creators: - expert-generated - crowdsourced language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zu programming_language: - C - C++ - C# - Go - Java - JavaScript - Lua - PHP - Python - Ruby - Rust - Scala - TypeScript license: - apache-2.0 multilinguality: - multilingual pretty_name: xP3 size_categories: - 100M<n<1B task_categories: - other --- # Dataset Card for xP3 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/bigscience-workshop/xmtf - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786) - **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co) ### Dataset Summary > xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot. - **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility. 
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3)) - **xP3 Dataset Family:** <table> <tr> <th>Name</th> <th>Explanation</th> <th>Example models</th> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></t> <td>Mixture of 13 training tasks in 46 languages with English prompts</td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></t> <td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td> <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></t> <td>xP3 + our evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td> <td></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></t> <td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></t> <td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td> <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td> </tr> </table> ## Dataset Structure ### Data Instances An example of "train" looks as follows: ```json { "inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?", "targets": "Yes" } ``` ### Data Fields The data fields are the same among all splits: - `inputs`: the natural language input fed to the model - `targets`: the natural language target that the model has to generate ### Data Splits The below table summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Due to languages like `tw` only being single sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage. 
|Language|Kilobytes|%|Samples|%| |--------|------:|-:|---:|-:| |tw|106288|0.11|265071|0.33| |bm|107056|0.11|265180|0.33| |ak|108096|0.11|265071|0.33| |ca|110608|0.11|271191|0.33| |eu|113008|0.11|281199|0.35| |fon|113072|0.11|265063|0.33| |st|114080|0.11|265063|0.33| |ki|115040|0.12|265180|0.33| |tum|116032|0.12|265063|0.33| |wo|122560|0.12|365063|0.45| |ln|126304|0.13|365060|0.45| |as|156256|0.16|265063|0.33| |or|161472|0.16|265063|0.33| |kn|165456|0.17|265063|0.33| |ml|175040|0.18|265864|0.33| |rn|192992|0.19|318189|0.39| |nso|229712|0.23|915051|1.13| |tn|235536|0.24|915054|1.13| |lg|235936|0.24|915021|1.13| |rw|249360|0.25|915043|1.13| |ts|250256|0.25|915044|1.13| |sn|252496|0.25|865056|1.07| |xh|254672|0.26|915058|1.13| |zu|263712|0.26|915061|1.13| |ny|272128|0.27|915063|1.13| |ig|325232|0.33|950097|1.17| |yo|352784|0.35|918416|1.13| |ne|393680|0.39|315754|0.39| |pa|523248|0.52|339210|0.42| |gu|560688|0.56|347499|0.43| |sw|566656|0.57|1130481|1.4| |mr|666240|0.67|417269|0.52| |bn|832720|0.83|428843|0.53| |ta|926912|0.93|415433|0.51| |te|1343232|1.35|584590|0.72| |ur|1918272|1.92|855756|1.06| |vi|3102512|3.11|1672106|2.07| |code|4330752|4.34|2707724|3.34| |hi|4403568|4.41|1554667|1.92| |zh|4599440|4.61|3589234|4.43| |id|4612256|4.62|2643418|3.27| |ar|4683456|4.69|2160181|2.67| |fr|6591120|6.6|5316403|6.57| |pt|6886800|6.9|3752156|4.63| |es|8587920|8.6|5413205|6.69| |en|39252528|39.33|32740750|40.44| |total|99807184|100.0|80956089|100.0| ## Dataset Creation ### Source Data #### Training datasets - Code Miscellaneous - [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex) - [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus) - [GreatCode](https://huggingface.co/datasets/great_code) - [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes) - Closed-book QA - [Hotpot QA](https://huggingface.co/datasets/hotpot_qa) - [Trivia QA](https://huggingface.co/datasets/trivia_qa) - [Web Questions](https://huggingface.co/datasets/web_questions) - [Wiki QA](https://huggingface.co/datasets/wiki_qa) - Extractive QA - [Adversarial QA](https://huggingface.co/datasets/adversarial_qa) - [CMRC2018](https://huggingface.co/datasets/cmrc2018) - [DRCD](https://huggingface.co/datasets/clue) - [DuoRC](https://huggingface.co/datasets/duorc) - [MLQA](https://huggingface.co/datasets/mlqa) - [Quoref](https://huggingface.co/datasets/quoref) - [ReCoRD](https://huggingface.co/datasets/super_glue) - [ROPES](https://huggingface.co/datasets/ropes) - [SQuAD v2](https://huggingface.co/datasets/squad_v2) - [xQuAD](https://huggingface.co/datasets/xquad) - TyDI QA - [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary) - [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp) - Multiple-Choice QA - [ARC](https://huggingface.co/datasets/ai2_arc) - [C3](https://huggingface.co/datasets/c3) - [CoS-E](https://huggingface.co/datasets/cos_e) - [Cosmos](https://huggingface.co/datasets/cosmos) - [DREAM](https://huggingface.co/datasets/dream) - [MultiRC](https://huggingface.co/datasets/super_glue) - [OpenBookQA](https://huggingface.co/datasets/openbookqa) - [PiQA](https://huggingface.co/datasets/piqa) - [QUAIL](https://huggingface.co/datasets/quail) - [QuaRel](https://huggingface.co/datasets/quarel) - [QuaRTz](https://huggingface.co/datasets/quartz) - [QASC](https://huggingface.co/datasets/qasc) - [RACE](https://huggingface.co/datasets/race) - [SciQ](https://huggingface.co/datasets/sciq) - [Social IQA](https://huggingface.co/datasets/social_i_qa) - [Wiki 
Hop](https://huggingface.co/datasets/wiki_hop) - [WiQA](https://huggingface.co/datasets/wiqa) - Paraphrase Identification - [MRPC](https://huggingface.co/datasets/super_glue) - [PAWS](https://huggingface.co/datasets/paws) - [PAWS-X](https://huggingface.co/datasets/paws-x) - [QQP](https://huggingface.co/datasets/qqp) - Program Synthesis - [APPS](https://huggingface.co/datasets/codeparrot/apps) - [CodeContests](https://huggingface.co/datasets/teven/code_contests) - [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs) - [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp) - [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search) - [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code) - Structure-to-text - [Common Gen](https://huggingface.co/datasets/common_gen) - [Wiki Bio](https://huggingface.co/datasets/wiki_bio) - Sentiment - [Amazon](https://huggingface.co/datasets/amazon_polarity) - [App Reviews](https://huggingface.co/datasets/app_reviews) - [IMDB](https://huggingface.co/datasets/imdb) - [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes) - [Yelp](https://huggingface.co/datasets/yelp_review_full) - Simplification - [BiSECT](https://huggingface.co/datasets/GEM/BiSECT) - Summarization - [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail) - [Gigaword](https://huggingface.co/datasets/gigaword) - [MultiNews](https://huggingface.co/datasets/multi_news) - [SamSum](https://huggingface.co/datasets/samsum) - [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua) - [XLSum](https://huggingface.co/datasets/GEM/xlsum) - [XSum](https://huggingface.co/datasets/xsum) - Topic Classification - [AG News](https://huggingface.co/datasets/ag_news) - [DBPedia](https://huggingface.co/datasets/dbpedia_14) - [TNEWS](https://huggingface.co/datasets/clue) - [TREC](https://huggingface.co/datasets/trec) - [CSL](https://huggingface.co/datasets/clue) - Translation - [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200) - [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt) - Word Sense Disambiguation - [WiC](https://huggingface.co/datasets/super_glue) - [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic) #### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for HumanEval) - Natural Language Inference - [ANLI](https://huggingface.co/datasets/anli) - [CB](https://huggingface.co/datasets/super_glue) - [RTE](https://huggingface.co/datasets/super_glue) - [XNLI](https://huggingface.co/datasets/xnli) - Coreference Resolution - [Winogrande](https://huggingface.co/datasets/winogrande) - [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd) - Program Synthesis - [HumanEval](https://huggingface.co/datasets/openai_humaneval) - Sentence Completion - [COPA](https://huggingface.co/datasets/super_glue) - [Story Cloze](https://huggingface.co/datasets/story_cloze) - [XCOPA](https://huggingface.co/datasets/xcopa) - [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze) #### Additional [xP3all](https://huggingface.co/datasets/bigscience/xP3all) datasets - Coreference Resolution - [WSC (Fixed)](https://huggingface.co/datasets/super_glue) - Sentence Completion - [HellaSwag](https://huggingface.co/datasets/hellaswag) - Translation - [MultiEurlex](https://huggingface.co/datasets/multi_eurlex) ## Additional Information ### Licensing Information The dataset is released under Apache 2.0. 
### Citation Information ```bibtex @misc{muennighoff2022crosslingual, title={Crosslingual Generalization through Multitask Finetuning}, author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel}, year={2022}, eprint={2211.01786}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset.
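To load and inspect examples like the one in the Data Instances section, a minimal sketch (assumptions: the `merged_{lang}.jsonl` files mentioned under Data Splits sit in per-language folders and parse with the default JSON builder; adjust `data_dir` to the repository's actual file layout):

```python
from datasets import load_dataset

# "en" is assumed to be a per-language folder holding merged_en.jsonl;
# check the repository's file listing for the real layout.
ds = load_dataset("bigscience/xP3all", data_dir="en", split="train")

example = ds[0]
print(example["inputs"])
print(example["targets"])
```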
alvations
null
null
null
false
1
false
alvations/greg-eval
2022-07-31T21:42:32.000Z
null
false
8eaa388a192aa57a7f0d34a8b3757c6a3d14b712
[]
[ "license:cc0-1.0" ]
https://huggingface.co/datasets/alvations/greg-eval/resolve/main/README.md
--- license: cc0-1.0 ---