| column | dtype | range |
|---|---|---|
| author | string | length 2–29 |
| cardData | null | n/a |
| citation | string | length 0–9.58k |
| description | string | length 0–5.93k |
| disabled | bool | 1 class |
| downloads | float64 | 1–1M |
| gated | bool | 2 classes |
| id | string | length 2–108 |
| lastModified | string | length 24 |
| paperswithcode_id | string | length 2–45 |
| private | bool | 2 classes |
| sha | string | length 40 |
| siblings | list | n/a |
| tags | list | n/a |
| readme_url | string | length 57–163 |
| readme | string | length 0–977k |
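The string-length bounds in the schema above can be checked mechanically. A minimal sketch, assuming records are plain Python dicts with the field names from the table (the `SCHEMA` dict and `validate` helper are illustrative, not part of any library):

```python
# Validate string-length fields of a record against the schema table above.
# Only a subset of fixed-bound fields is checked here.
SCHEMA = {
    "author": (2, 29),
    "id": (2, 108),
    "lastModified": (24, 24),
    "sha": (40, 40),
    "readme_url": (57, 163),
}

def validate(record):
    """Return the names of string fields whose length violates the schema."""
    errors = []
    for field, (lo, hi) in SCHEMA.items():
        value = record.get(field)
        if value is None:
            continue  # several fields (cardData, citation, ...) may be null
        if not isinstance(value, str) or not (lo <= len(value) <= hi):
            errors.append(field)
    return errors

# Sample record mirroring the first row of this dump.
record = {
    "author": "gere",
    "id": "gere/Dataset",
    "lastModified": "2022-08-05T15:03:20.000Z",
    "sha": "f8190b3b6133c6a3bcc36ac328c4639038bde8d8",
    "readme_url": "https://huggingface.co/datasets/gere/Dataset/resolve/main/README.md",
}
print(validate(record))  # an empty list means the record conforms
```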
gere
null
null
null
false
2
false
gere/Dataset
2022-08-05T15:03:20.000Z
null
false
f8190b3b6133c6a3bcc36ac328c4639038bde8d8
[]
[]
https://huggingface.co/datasets/gere/Dataset/resolve/main/README.md
simonduerr
null
null
null
false
2
false
simonduerr/inversefolding
2022-08-05T23:28:08.000Z
null
false
5a44bca6c3dea67e08711345f5187835f8dbda6e
[]
[ "license:odc-by" ]
https://huggingface.co/datasets/simonduerr/inversefolding/resolve/main/README.md
--- license: odc-by ---
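Many `readme` fields in this dump consist only of a YAML front-matter header like the one above. A minimal parsing sketch with no external YAML dependency; it handles only the simple `key: value` case shown here (list-valued fields in the longer cards below would need a real YAML parser):

```python
def parse_front_matter(readme: str) -> dict:
    """Extract flat key: value pairs from a '--- ... ---' header."""
    readme = readme.strip()
    if not readme.startswith("---"):
        return {}
    inner = readme[3:]
    end = inner.find("---")
    if end == -1:
        return {}  # unterminated header
    meta = {}
    for line in inner[:end].strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

print(parse_front_matter("---\nlicense: odc-by\n---"))  # {'license': 'odc-by'}
```

The same function also accepts the flattened single-line form stored in this dump, e.g. `"--- license: odc-by ---"`.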
quantity
null
null
null
false
2
false
quantity/mydataset1
2022-08-06T00:06:21.000Z
null
false
505b42d138140786fc9632bfea619eb6ebb9ea87
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/quantity/mydataset1/resolve/main/README.md
--- license: apache-2.0 ---
quantity
null
null
null
false
3
false
quantity/model7
2022-08-06T00:06:50.000Z
null
false
937b4e764f7988566909e6f68fd8bbe0c4359514
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/quantity/model7/resolve/main/README.md
--- license: apache-2.0 ---
salim-ingram
null
null
null
false
3
false
salim-ingram/philosophy_quotes
2022-08-06T01:42:59.000Z
null
false
73747883223e6886f5b304e180e04254fb2d4f41
[]
[ "license:wtfpl" ]
https://huggingface.co/datasets/salim-ingram/philosophy_quotes/resolve/main/README.md
--- license: wtfpl ---
jakartaresearch
null
null
This dataset is built as a beginner-friendly playground for creating a sentiment analysis model.
false
10
false
jakartaresearch/google-play-review
2022-08-06T16:24:49.000Z
null
false
4030949b0360722d8853eb01d407393de0b40bad
[]
[ "annotations_creators:found", "language:id", "language_creators:found", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "tags:sentiment", "tags:google-play", "tags:indonesian", "task_categories:text-classification", "task_ids:sentimen...
https://huggingface.co/datasets/jakartaresearch/google-play-review/resolve/main/README.md
--- annotations_creators: - found language: - id language_creators: - found license: - cc-by-4.0 multilinguality: - monolingual pretty_name: Indonesian Google Play Review size_categories: - 1K<n<10K source_datasets: - original tags: - sentiment - google-play - indonesian task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset Card for Indonesian Google Play Review ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Scraped from an e-commerce app on Google Play. 
### Supported Tasks and Leaderboards Sentiment Analysis ### Languages Indonesian ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
autoevaluate
null
null
null
false
3
false
autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-24db4c9a-12575677
2022-08-06T12:52:18.000Z
null
false
e758e7c5ea70be1fcfd0287c8a798ff91ff6e3d4
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:Blaise-g/SumPubmed" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-24db4c9a-12575677/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - Blaise-g/SumPubmed eval_info: task: summarization model: L-macc/autotrain-Biomedical_sc_summ-1217846148 metrics: [] dataset_name: Blaise-g/SumPubmed dataset_config: Blaise-g--SumPubmed dataset_split: test col_mapping: text: text target: abstract --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: L-macc/autotrain-Biomedical_sc_summ-1217846148 * Dataset: Blaise-g/SumPubmed * Config: Blaise-g--SumPubmed * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@L-macc](https://huggingface.co/L-macc) for evaluating this model.
autoevaluate
null
null
null
false
2
false
autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-24db4c9a-12575678
2022-08-06T13:16:16.000Z
null
false
53ad23a7638e94f869adadb1bad94c93d6de0854
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:Blaise-g/SumPubmed" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-24db4c9a-12575678/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - Blaise-g/SumPubmed eval_info: task: summarization model: L-macc/autotrain-Biomedical_sc_summ-1217846144 metrics: [] dataset_name: Blaise-g/SumPubmed dataset_config: Blaise-g--SumPubmed dataset_split: test col_mapping: text: text target: abstract --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: L-macc/autotrain-Biomedical_sc_summ-1217846144 * Dataset: Blaise-g/SumPubmed * Config: Blaise-g--SumPubmed * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@L-macc](https://huggingface.co/L-macc) for evaluating this model.
autoevaluate
null
null
null
false
2
false
autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-24db4c9a-12575679
2022-08-06T13:52:49.000Z
null
false
3ea2191ea55e1d81f858bec4b51fb42cda713184
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:Blaise-g/SumPubmed" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-24db4c9a-12575679/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - Blaise-g/SumPubmed eval_info: task: summarization model: L-macc/autotrain-Biomedical_sc_summ-1217846142 metrics: [] dataset_name: Blaise-g/SumPubmed dataset_config: Blaise-g--SumPubmed dataset_split: test col_mapping: text: text target: abstract --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: L-macc/autotrain-Biomedical_sc_summ-1217846142 * Dataset: Blaise-g/SumPubmed * Config: Blaise-g--SumPubmed * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@L-macc](https://huggingface.co/L-macc) for evaluating this model.
autoevaluate
null
null
null
false
3
false
autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-c887ce73-12585680
2022-08-06T16:29:37.000Z
null
false
2f911a890c1c1b9220100b4c83cfec52bc6cfe96
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:Blaise-g/SumPubmed" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-c887ce73-12585680/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - Blaise-g/SumPubmed eval_info: task: summarization model: Blaise-g/long_t5_global_large_pubmed_wip2 metrics: [] dataset_name: Blaise-g/SumPubmed dataset_config: Blaise-g--SumPubmed dataset_split: test col_mapping: text: text target: abstract --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Blaise-g/long_t5_global_large_pubmed_wip2 * Dataset: Blaise-g/SumPubmed * Config: Blaise-g--SumPubmed * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model.
dali-does
null
@misc{https://doi.org/10.48550/arxiv.2208.05358, doi = {10.48550/ARXIV.2208.05358}, url = {https://arxiv.org/abs/2208.05358}, author = {Lindström, Adam Dahlgren and Abraham, Savitha Sam}, keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7; I.2.10; I.2.6; I.4.8; I.1.4}, title = {CLEVR-Math: A Dataset for Compositional Language, Visual, and Mathematical Reasoning}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Share Alike 4.0 International} }
CLEVR-Math is a dataset for compositional language, visual and mathematical reasoning. CLEVR-Math poses questions about mathematical operations on visual scenes using subtraction and addition, such as "Remove all large red cylinders. How many objects are left?". There are also adversarial (e.g. "Remove all blue cubes. How many cylinders are left?") and multihop questions (e.g. "Remove all blue cubes. Remove all small purple spheres. How many objects are left?").
false
446
false
dali-does/clevr-math
2022-10-31T11:28:31.000Z
null
false
6a30110f887edd7edbad033275aa853ddd8c4a26
[]
[ "arxiv:2208.05358", "annotations_creators:machine-generated", "language:en", "language_creators:machine-generated", "license:cc-by-4.0", "multilinguality:monolingual", "source_datasets:clevr", "tags:reasoning", "tags:neuro-symbolic", "tags:multimodal", "task_categories:visual-question-answering"...
https://huggingface.co/datasets/dali-does/clevr-math/resolve/main/README.md
--- annotations_creators: - machine-generated language: - en language_creators: - machine-generated license: - cc-by-4.0 multilinguality: - monolingual pretty_name: CLEVR-Math - Compositional language, visual, and mathematical reasoning size_categories: #- 100K<n<1M source_datasets: [clevr] tags: - reasoning - neuro-symbolic - multimodal task_categories: - visual-question-answering task_ids: - visual-question-answering --- # Dataset Card for CLEVR-Math ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://github.com/dali-does/clevr-math - **Paper:** https://arxiv.org/abs/2208.05358 - **Leaderboard:** - **Point of Contact:** dali@cs.umu.se ### Dataset Summary Dataset for compositional multimodal mathematical reasoning based on CLEVR. 
#### Loading the data, preprocessing text with CLIP

```
from transformers import CLIPProcessor
from datasets import load_dataset, DownloadConfig

dl_config = DownloadConfig(resume_download=True, num_proc=8, force_download=True)

# Load 'general' instance of dataset
dataset = load_dataset('dali-does/clevr-math', download_config=dl_config)

# Load version with only multihop in test data
dataset_multihop = load_dataset('dali-does/clevr-math', 'multihop',
                                download_config=dl_config)

model_path = "openai/clip-vit-base-patch32"
extractor = CLIPProcessor.from_pretrained(model_path)

def transform_tokenize(e):
    e['image'] = [image.convert('RGB') for image in e['image']]
    return extractor(text=e['question'], images=e['image'], padding=True)

dataset = dataset.map(transform_tokenize, batched=True, num_proc=8)
dataset_subtraction = dataset.filter(
    lambda e: e['template'].startswith('subtraction'), num_proc=4)
```

### Supported Tasks and Leaderboards Leaderboard will be announced at a later date. ### Languages The dataset is currently only available in English. To extend the dataset to other languages, the CLEVR templates must be rewritten in the target language. ## Dataset Structure ### Data Instances * `general` containing the default version with multihop questions in train and test * `multihop` containing multihop questions only in test data to test generalisation of reasoning ### Data Fields

```
features = datasets.Features(
    {
        "template": datasets.Value("string"),
        "id": datasets.Value("string"),
        "question": datasets.Value("string"),
        "image": datasets.Image(),
        "label": datasets.Value("int64")
    }
)
```

### Data Splits train/val/test ## Dataset Creation Data is generated using code provided with the CLEVR dataset, using Blender and templates constructed by the dataset curators. 
## Considerations for Using the Data ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Adam Dahlgren Lindström - dali@cs.umu.se ### Licensing Information Licensed under Creative Commons Attribution Share Alike 4.0 International (CC-by 4.0). ### Citation Information [More Information Needed] ``` @misc{https://doi.org/10.48550/arxiv.2208.05358, doi = {10.48550/ARXIV.2208.05358}, url = {https://arxiv.org/abs/2208.05358}, author = {Lindström, Adam Dahlgren and Abraham, Savitha Sam}, keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7; I.2.10; I.2.6; I.4.8; I.1.4}, title = {CLEVR-Math: A Dataset for Compositional Language, Visual, and Mathematical Reasoning}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Share Alike 4.0 International} } ``` ### Contributions Thanks to [@dali-does](https://github.com/dali-does) for adding this dataset.
chinoll
null
null
null
false
1
false
chinoll/ACGVoice
2022-08-06T14:35:21.000Z
null
false
d05b50253b2cc0a1742dbc0f5cc0f76de4c1a301
[]
[ "license:cc-by-nc-sa-4.0" ]
https://huggingface.co/datasets/chinoll/ACGVoice/resolve/main/README.md
--- license: cc-by-nc-sa-4.0 ---
pcuenq
null
null
null
false
52
false
pcuenq/oxford-pets
2022-08-06T16:01:34.000Z
null
false
2c628097f293a86bdba429379dbb91c0952415eb
[]
[ "tags:pets", "tags:oxford", "license:cc-by-sa-4.0", "source_datasets:https://www.robots.ox.ac.uk/~vgg/data/pets/", "task_categories:image-classification" ]
https://huggingface.co/datasets/pcuenq/oxford-pets/resolve/main/README.md
--- tags: - pets - oxford license: cc-by-sa-4.0 license_details: https://www.robots.ox.ac.uk/~vgg/data/pets/ pretty_name: Oxford-IIIT Pet Dataset (no annotations) source_datasets: https://www.robots.ox.ac.uk/~vgg/data/pets/ task_categories: - image-classification --- # Oxford-IIIT Pet Dataset Images from [The Oxford-IIIT Pet Dataset](https://www.robots.ox.ac.uk/~vgg/data/pets/). Only images and labels have been pushed; segmentation annotations were ignored. - **Homepage:** https://www.robots.ox.ac.uk/~vgg/data/pets/ License: Same as the original dataset.
Qilex
null
null
null
false
2
false
Qilex/EN-MEspecialChars
2022-08-06T21:38:43.000Z
null
false
d598048f46b2f7796dcf3f29f969dd53114d13af
[]
[ "language:en", "language:me", "license:afl-3.0", "multilinguality:translation", "size_categories:10K<n<100K", "tags:middle english", "task_categories:translation" ]
https://huggingface.co/datasets/Qilex/EN-MEspecialChars/resolve/main/README.md
--- annotations_creators: [] language: - en - me language_creators: [] license: - afl-3.0 multilinguality: - translation pretty_name: EN-MEspecialChars size_categories: - 10K<n<100K source_datasets: [] tags: - middle english task_categories: - translation task_ids: [] --- EN-ME Special Chars is a dataset of roughly 58000 aligned sentence pairs in English and Middle English, collected from the works of Geoffrey Chaucer, John Lydgate, John Wycliffe, and the Gawain Poet. It includes special characters such as þ. There is mild standardization, but this dataset reflects the spelling inconsistencies characteristic of Middle English.
VanessaSchenkel
null
null
null
false
6
false
VanessaSchenkel/handmade-dataset
2022-08-06T22:11:34.000Z
null
false
b839c6ac6fc3fbf9ed2c3926433196b35f72afb9
[]
[ "annotations_creators:found", "language:en", "language:pt", "language_creators:found", "license:afl-3.0", "multilinguality:translation", "size_categories:n<1K", "source_datasets:original", "task_categories:translation" ]
https://huggingface.co/datasets/VanessaSchenkel/handmade-dataset/resolve/main/README.md
--- annotations_creators: - found language: - en - pt language_creators: - found license: - afl-3.0 multilinguality: - translation pretty_name: VanessaSchenkel/handmade-dataset size_categories: - n<1K source_datasets: - original tags: [] task_categories: - translation task_ids: [] --- A dataset of sentences about professions; half of the translations are feminine and half are masculine. How to use it:

```
from datasets import load_dataset

remote_dataset = load_dataset("VanessaSchenkel/handmade-dataset", field="data")
remote_dataset
```

Output:

```
DatasetDict({
    train: Dataset({
        features: ['id', 'translation'],
        num_rows: 388
    })
})
```

Example:

```
remote_dataset["train"][5]
```

Output:

```
{'id': '5', 'translation': {'english': 'the postman finished her work .', 'portuguese': 'A carteira terminou seu trabalho .'}}
```
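The nested `translation` dict shown in the card's sample output can be unpacked like this; the dict literal below reproduces the sample record, and the `to_pair` helper is illustrative, not part of the `datasets` API:

```python
# Sample record as printed in the card above.
sample = {
    "id": "5",
    "translation": {
        "english": "the postman finished her work .",
        "portuguese": "A carteira terminou seu trabalho .",
    },
}

def to_pair(example):
    """Return a (source, target) sentence pair from a translation record."""
    t = example["translation"]
    return t["english"], t["portuguese"]

src, tgt = to_pair(sample)
print(src)  # the postman finished her work .
```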
VanessaSchenkel
null
null
null
false
6
false
VanessaSchenkel/opus_books_en_pt
2022-08-06T22:46:10.000Z
null
false
7dd7ea5bc04520e2d01b963a15830ebff6e5db4b
[]
[ "annotations_creators:found", "language:en", "language:pt", "language_creators:found", "license:afl-3.0", "multilinguality:translation", "size_categories:1K<n<10K", "source_datasets:extended|opus_books", "task_categories:translation" ]
https://huggingface.co/datasets/VanessaSchenkel/opus_books_en_pt/resolve/main/README.md
--- annotations_creators: - found language: - en - pt language_creators: - found license: - afl-3.0 multilinguality: - translation pretty_name: VanessaSchenkel/opus_books_en_pt size_categories: - 1K<n<10K source_datasets: - extended|opus_books tags: [] task_categories: - translation task_ids: [] --- How to use it:

```
from datasets import load_dataset

remote_dataset = load_dataset("VanessaSchenkel/opus_books_en_pt", field="data")
remote_dataset
```

Output:

```
DatasetDict({
    train: Dataset({
        features: ['id', 'translation'],
        num_rows: 1404
    })
})
```

Example:

```
remote_dataset["train"][5]
```

Output:

```
{'id': '5', 'translation': {'en': "There was nothing so very remarkable in that; nor did Alice think it so very much out of the way to hear the Rabbit say to itself, 'Oh dear!", 'pt': 'Não havia nada de tão extraordinário nisso; nem Alice achou assim tão fora do normal ouvir o Coelho dizer para si mesmo: —"Oh, céus!'}}
```
jakartaresearch
null
null
This dataset is built as a beginner-friendly playground for creating a sentiment analysis model.
false
3
false
jakartaresearch/indonews
2022-08-07T04:27:54.000Z
null
false
d628ab354f86c439b1eb1db39b3dc6cde6497346
[]
[ "annotations_creators:found", "language:id", "language_creators:found", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "tags:news", "tags:news-classifcation", "tags:indonesia", "task_categories:text-classification", "task_ids:multi-c...
https://huggingface.co/datasets/jakartaresearch/indonews/resolve/main/README.md
--- annotations_creators: - found language: - id language_creators: - found license: - cc-by-4.0 multilinguality: - monolingual pretty_name: Indonews size_categories: - 1K<n<10K source_datasets: - original tags: - news - news-classifcation - indonesia task_categories: - text-classification task_ids: - multi-class-classification --- # Indonesian News Categorization ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Indonews: Multiclass News Categorization, scraped from popular news portals in Indonesia. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
jakartaresearch
null
null
This dataset is built for text generation task in context of poem tweets in Bahasa.
false
8
false
jakartaresearch/poem-tweets
2022-08-07T08:54:18.000Z
null
false
c73fd7730502cf3694ce5072b899b6ee6ac2bebf
[]
[ "annotations_creators:no-annotation", "language:id", "language_creators:found", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "tags:poem", "tags:tweets", "tags:twitter", "tags:indonesian", "task_categories:text-generation", "tas...
https://huggingface.co/datasets/jakartaresearch/poem-tweets/resolve/main/README.md
--- annotations_creators: - no-annotation language: - id language_creators: - found license: - cc-by-4.0 multilinguality: - monolingual pretty_name: poem_tweets size_categories: - 10K<n<100K source_datasets: - original tags: - poem - tweets - twitter - indonesian task_categories: - text-generation task_ids: - language-modeling --- # Dataset Card for Poem Tweets ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The data are from Twitter. 
The purpose of this dataset is to build a text generation model for short texts that are coherent and rhythmic. ### Supported Tasks and Leaderboards - Text Generation - Language Model ### Languages Indonesian ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
n6L3
null
null
null
false
1
false
n6L3/tabular
2022-08-07T10:11:27.000Z
null
false
7a5acc07f9db8c57080a713c40a3e24484d865c0
[]
[ "license:cc-by-sa-3.0" ]
https://huggingface.co/datasets/n6L3/tabular/resolve/main/README.md
--- license: cc-by-sa-3.0 ---
munggok
null
null
null
false
3
false
munggok/KoPI-CC
2022-09-30T00:10:56.000Z
oscar
false
db57f7c6f265ccf59c03b8c9fb2b32e9a9ca90f5
[]
[ "arxiv:2201.06642", "license:cc", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "language:id", "source_datasets:original", "task_ids:language-modeling" ]
https://huggingface.co/datasets/munggok/KoPI-CC/resolve/main/README.md
--- license: cc annotations_creators: - no-annotation language_creators: - found multilinguality: - monolingual language: - id source_datasets: - original task_categories: - sequence-modeling task_ids: - language-modeling paperswithcode_id: oscar --- ### Dataset Summary KoPI-CC (Korpus Perayapan Indonesia-CC) is an Indonesian-only extract from Common Crawl snapshots produced with [ungoliant](https://github.com/oscar-corpus/ungoliant); each snapshot is also filtered with deduplication techniques such as exact-hash (MD5) dedup and MinHash LSH neardup ### Preprocessing Each folder name inside the snapshots folder denotes the preprocessing that has been applied. - **Raw** - processed directly from a CC snapshot using ungoliant without any additional filter; you can read about it in their paper (citation below) - uses the same "raw cc snapshot" for `2021_10` and `2021_49` as the OSCAR dataset ([2109](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/tree/main/packaged_nondedup/id) and [2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201/tree/main/compressed/id_meta)) - **Dedup** - uses data from the raw folder - applies cleaning techniques to every text in the documents, such as - fix html - remove noisy unicode - fix news tags - remove control chars - filter out short texts (under 20 words) - filter by character ratios inside the text, such as - min_alphabet_ratio (0.75) - max_upper_ratio (0.10) - max_number_ratio (0.05) - filter by exact dedup - hash all text with md5 hashlib - remove non-unique hashes - full code for the dedup step adapted from [here](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned/tree/main) - **Neardup** - uses data from the dedup folder - creates index clusters using neardup [Minhash and LSH](http://ekzhu.com/datasketch/lsh.html) with the following config: - 128 permutations - 6-gram size - word tokenization (split sentences by space) - 0.8 similarity score - filter by removing all indices in a cluster - full
code about neardup step adapted from [here](https://github.com/ChenghaoMou/text-dedup) - **Neardup_clean** - use data from neardup folder - Removing documents containing words from a selection of the [Indonesian Bad Words](https://github.com/acul3/c4_id_processed/blob/67e10c086d43152788549ef05b7f09060e769993/clean/badwords_ennl.py#L64). - Removing sentences containing: - Less than 3 words. - A word longer than 1000 characters. - An end symbol not matching end-of-sentence punctuation. - Strings associated to javascript code (e.g. `{`), lorem ipsum, policy information in indonesia - Removing documents (after sentence filtering): - Containing less than 5 sentences. - Containing less than 500 or more than 50'000 characters. - full code about neardup_clean step adapted from [here](https://gitlab.com/yhavinga/c4nlpreproc) ## Dataset Structure ### Data Instances An example from the dataset: ``` {'text': 'Panitia Kerja (Panja) pembahasan RUU Cipta Kerja (Ciptaker) DPR RI memastikan naskah UU Ciptaker sudah final, tapi masih dalam penyisiran. Penyisiran dilakukan agar isi UU Ciptaker sesuai dengan kesepakatan dalam pembahasan dan tidak ada salah pengetikan (typo).\n"Kan memang sudah diumumkan, naskah final itu sudah. Cuma kita sekarang … DPR itu kan punya waktu 7 hari sebelum naskah resminya kita kirim ke pemerintah. Nah, sekarang itu kita sisir, jangan sampai ada yang salah pengetikan, tapi tidak mengubah substansi," kata Ketua Panja RUU Ciptaker Supratman Andi Agtas saat berbincang dengan detikcom, Jumat (9/10/2020) pukul 10.56 WIB.\nSupratman mengungkapkan Panja RUU Ciptaker menggelar rapat hari ini untuk melakukan penyisiran terhadap naskah UU Ciptaker. Panja, sebut dia, bekerja sama dengan pemerintah dan ahli bahasa untuk melakukan penyisiran naskah.\n"Sebentar, siang saya undang seluruh poksi-poksi (kelompok fraksi) Baleg (Badan Legislasi DPR), anggota Panja itu datang ke Baleg untuk melihat satu per satu, jangan sampai …. 
Karena kan sekarang ini tim dapur pemerintah dan DPR lagi bekerja bersama dengan ahli bahasa melihat jangan sampai ada yang typo, redundant," terangnya.\nSupratman membenarkan bahwa naskah UU Ciptaker yang final itu sudah beredar. Ketua Baleg DPR itu memastikan penyisiran yang dilakukan tidak mengubah substansi setiap pasal yang telah melalui proses pembahasan.\n"Itu yang sudah dibagikan. Tapi kan itu substansinya yang tidak mungkin akan berubah. Nah, kita pastikan nih dari sisi drafting-nya yang jadi kita pastikan," tutur Supratman.\nLebih lanjut Supratman menjelaskan DPR memiliki waktu 7 hari untuk melakukan penyisiran. Anggota DPR dari Fraksi Gerindra itu memastikan paling lambat Selasa (13/10) pekan depan, naskah UU Ciptaker sudah bisa diakses oleh masyarakat melalui situs DPR.\n"Kita itu, DPR, punya waktu sampai 7 hari kerja. Jadi harusnya hari Selasa sudah final semua, paling lambat. Tapi saya usahakan hari ini bisa final. Kalau sudah final, semua itu langsung bisa diakses di web DPR," terang Supratman.\nDiberitakan sebelumnya, Wakil Ketua Baleg DPR Achmad Baidowi mengakui naskah UU Ciptaker yang telah disahkan di paripurna DPR masih dalam proses pengecekan untuk menghindari kesalahan pengetikan. Anggota Komisi VI DPR itu menyinggung soal salah ketik dalam revisi UU KPK yang disahkan pada 2019.\n"Mengoreksi yang typo itu boleh, asalkan tidak mengubah substansi. 
Jangan sampai seperti tahun lalu, ada UU salah ketik soal umur \'50 (empat puluh)\', sehingga pemerintah harus mengonfirmasi lagi ke DPR," ucap Baidowi, Kamis (8/10).', 'url': 'https://news.detik.com/berita/d-5206925/baleg-dpr-naskah-final-uu-ciptaker-sedang-diperbaiki-tanpa-ubah-substansi?tag_from=wp_cb_mostPopular_list&_ga=2.71339034.848625040.1602222726-629985507.1602222726', 'timestamp': '2021-10-22T04:09:47Z', 'meta': '{"warc_headers": {"content-length": "2747", "content-type": "text/plain", "warc-date": "2021-10-22T04:09:47Z", "warc-record-id": "<urn:uuid:a5b2cc09-bd2b-4d0e-9e5b-2fcc5fce47cb>", "warc-identified-content-language": "ind,eng", "warc-target-uri": "https://news.detik.com/berita/d-5206925/baleg-dpr-naskah-final-uu-ciptaker-sedang-diperbaiki-tanpa-ubah-substansi?tag_from=wp_cb_mostPopular_list&_ga=2.71339034.848625040.1602222726-629985507.1602222726", "warc-block-digest": "sha1:65AWBDBLS74AGDCGDBNDHBHADOKSXCKV", "warc-type": "conversion", "warc-refers-to": "<urn:uuid:b7ceadba-7120-4e38-927c-a50db21f0d4f>"}, "identification": {"label": "id", "prob": 0.6240405}, "annotations": null, "line_identifications": [null, {"label": "id", "prob": 0.9043896}, null, null, {"label": "id", "prob": 0.87111086}, {"label": "id", "prob": 0.9095224}, {"label": "id", "prob": 0.8579232}, {"label": "id", "prob": 0.81366056}, {"label": "id", "prob": 0.9286813}, {"label": "id", "prob": 0.8435194}, {"label": "id", "prob": 0.8387821}, null]}'} ``` ### Data Fields The data contains the following fields: - `url`: url of the source as a string - `text`: text content as a string - `timestamp`: timestamp of extraction as a string - `meta` : json representation of the original from ungoliant tools,can be found [here](https://oscar-corpus.com/post/oscar-v22-01/) (warc_heder) ## Additional Information ### Dataset Curators For inquiries or requests regarding the KoPI-CC contained in this repository, please contact me at [samsulrahmadani@gmail.com](mailto:samsulrahmadani@gmail.com) ### 
Licensing Information These data are released under the following licensing scheme: I do not own any of the text from which these data have been extracted. The actual packaging of these data is under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/ Should you consider that the data contain material that is owned by you and should therefore not be reproduced here, please: * Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted. * Clearly identify the copyrighted work claimed to be infringed. * Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material. I will comply with legitimate requests by removing the affected sources from the next release of the corpus. ### Citation Information ``` @ARTICLE{2022arXiv220106642A, author = {{Abadji}, Julien and {Ortiz Suarez}, Pedro and {Romary}, Laurent and {Sagot}, Beno{\^\i}t}, title = "{Towards a Cleaner Document-Oriented Multilingual Crawled Corpus}", journal = {arXiv e-prints}, keywords = {Computer Science - Computation and Language}, year = 2022, month = jan, eid = {arXiv:2201.06642}, pages = {arXiv:2201.06642}, archivePrefix = {arXiv}, eprint = {2201.06642}, primaryClass = {cs.CL}, adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv220106642A}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @inproceedings{AbadjiOrtizSuarezRomaryetal.2021, author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot}, title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus}, series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021.
Limerick, 12 July 2021 (Online-Event)}, editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta}, publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache}, address = {Mannheim}, doi = {10.14618/ids-pub-10468}, url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688}, pages = {1 -- 9}, year = {2021}, abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.}, language = {en} } ```
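The exact-hash dedup step described in the Preprocessing section above (hash every text with md5 hashlib, keep only unique hashes) can be sketched in pure Python. This is an illustrative reconstruction, not the project's actual code:

```python
import hashlib

def exact_dedup(texts):
    """Keep only the first occurrence of each exact duplicate,
    using the MD5 hash of the stripped text as the dedup key."""
    seen = set()
    unique = []
    for text in texts:
        digest = hashlib.md5(text.strip().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return unique

docs = ["Berita hari ini.", "Berita hari ini.", "Berita kemarin."]
print(exact_dedup(docs))  # the exact duplicate is dropped
```

The neardup step then goes further, clustering near-identical documents with MinHash LSH before filtering.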
biglam
null
null
null
false
2
false
biglam/bnl_ground_truth_newspapers_before_1878
2022-08-07T13:16:10.000Z
null
false
64c7044457dd130ae6db88a0a5c386a1c1a6249e
[]
[ "license:cc0-1.0" ]
https://huggingface.co/datasets/biglam/bnl_ground_truth_newspapers_before_1878/resolve/main/README.md
--- license: cc0-1.0 --- ### Dataset description 33.000 transcribed text lines from historical newspapers (before 1878), along with the cropped images of the original scans. Text-line-based OCR: 19.000 text lines in Antiqua and 14.000 text lines in Fraktur. Transcribed using double-keying (99.95% accuracy). Public Domain, CC0 (see copyright notice). Best for training an OCR engine. The newspapers used are: - Le Gratis luxembourgeois (1857-1858) - Luxemburger Volks-Freund (1869-1876) - L'Arlequin (1848-1848) - Courrier du Grand-Duché de Luxembourg (1844-1868) - L'Avenir (1868-1871) - Der Wächter an der Sauer (1849-1869) - Luxemburger Zeitung (1844-1845) - Luxemburger Zeitung = Journal de Luxembourg (1858-1859) - Der Volksfreund (1848-1849) - Cäcilia (1862-1871) - Kirchlicher Anzeiger für die Diözese Luxemburg (1871-1878) - L'Indépendance luxembourgeoise (1871-1878) - Luxemburger Anzeiger (1856) - L'Union (1860-1871) - Diekircher Wochenblatt (1837-1848) - Das Vaterland (1869-1870) - D'Wäschfra (1868-1878) - Luxemburger Bauernzeitung (1857) - Luxemburger Wort (1848-1878) ### URL for this dataset https://data.bnl.lu/data/historical-newspapers/ ### Dataset format Two JSONL files (antiqua.jsonl.gz and fraktur.jsonl.gz) with the following fields: - `font` is either antiqua or fraktur - `img` is the filename of the associated image for the text - `text` is the hand-corrected double-keyed text transcribed from the image Sample: ```json { "font": "fraktur", "img": "fraktur-000011.png", "text": "Vidal die Vollmacht für Paris an. Auch" } ``` In addition, there are two `.zip` files with the associated images ### Dataset modality Text and associated images from scans ### Dataset licence Creative Commons Public Domain Dedication and Certification ### size of dataset 500MB-2GB ### Contact details for data custodian opendata@bnl.etat.lu
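Given the JSONL layout above, the gzipped files can be read with the standard library alone. A minimal sketch (field and file names taken from the card; the demo uses an in-memory buffer instead of a real download):

```python
import gzip
import io
import json

def load_lines(path_or_file, font=None):
    """Yield records from a gzipped JSONL file such as fraktur.jsonl.gz,
    optionally keeping only one font ('antiqua' or 'fraktur')."""
    with gzip.open(path_or_file, mode="rt", encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if font is None or record["font"] == font:
                yield record

# In-memory demo using the sample record from the card
sample = {"font": "fraktur", "img": "fraktur-000011.png",
          "text": "Vidal die Vollmacht für Paris an. Auch"}
buf = io.BytesIO(gzip.compress((json.dumps(sample) + "\n").encode("utf-8")))
records = list(load_lines(buf, font="fraktur"))
print(records[0]["img"])
```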
luigisaetta
null
null
null
false
10
false
luigisaetta/atco2
2022-08-29T07:36:28.000Z
null
false
2f37090fe26d8da9b59f8403426fa17c69a9f157
[]
[]
https://huggingface.co/datasets/luigisaetta/atco2/resolve/main/README.md
This dataset contains ATC communications. It can be used to fine-tune an **ASR** model specialised for Air Traffic Control Communications (ATC). Its data have been taken from the [ATCO2 site](https://www.atco2.org/data)
Truthful
null
null
null
false
2
false
Truthful/autotrain-data-provision_classification
2022-08-08T05:29:45.000Z
null
false
704867178079f256151dc7d561bb241083f3c0de
[]
[ "task_categories:text-classification" ]
https://huggingface.co/datasets/Truthful/autotrain-data-provision_classification/resolve/main/README.md
--- task_categories: - text-classification --- # AutoTrain Dataset for project: provision_classification ## Dataset Description This dataset has been automatically processed by AutoTrain for project provision_classification. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "Each Partner hereby represents and warrants to the Partnership and each other Partner that (a)\u00a0if such Partner is a corporation, it is duly organized, validly existing, and in good standing under the laws of the jurisdiction of its incorporation and is duly qualified and in good standing as a foreign corporation in the jurisdiction of its principal place of business (if not incorporated therein), (b) if such Partner is a trust, estate or other entity, it is duly formed, validly existing, and (if applicable) in good standing under the laws of the jurisdiction of its formation, and if required by law is duly qualified to do business and (if applicable) in good standing in the jurisdiction of its principal place of business (if not formed therein), (c) such Partner has full corporate, trust, or other applicable right, power and authority to enter into this Agreement and to perform its obligations hereunder and all necessary actions by the board of directors, trustees, beneficiaries, or other Persons necessary for the due authorization, execution, delivery, and performance of this Agreement by such Partner have been duly taken, and such authorization, execution, delivery, and performance do not conflict with any other agreement or arrangement to which such Partner is a party or by which it is bound, and (d)\u00a0such Partner is acquiring its interest in the Partnership for investment purposes and not with a view to distribution thereof.", "target": 13 }, { "text": "This Letter Agreement is binding upon and inures to the benefit of the parties and their respective heirs,
executors, administrators, personal representatives, successors, and permitted assigns. This Letter Agreement is personal to you and the availability of you to perform services and the covenants provided by you hereunder have been a material consideration for the Company to enter into this Letter Agreement. Accordingly, you may not assign any of your rights or delegate any of your duties under this Letter Agreement, either voluntarily or by operation of law, without the prior written consent of the Company, which may be given or withheld by the Company in its sole and absolute discretion.", "target": 0 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=19, names=['Assignment', 'Attorney Fees', 'Bankruptcy', 'Change of Control', 'Compliance with Laws', 'Confidentiality', 'Entire Agreement', 'General Definition', 'Governing Law', 'Indemnification', 'Injunctive Relief', 'Jurisdiction and Venue', 'Liens', 'No Warranties', 'Other', 'Permitted Disclosure', 'Survival', 'Term', 'Termination for Convenience'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 119023 | | valid | 13225 |
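A small sketch of decoding the integer `target` back to a class name, assuming the label order given in the `ClassLabel` definition above:

```python
# Label order as listed in the ClassLabel definition of the dataset card
PROVISION_CLASSES = [
    "Assignment", "Attorney Fees", "Bankruptcy", "Change of Control",
    "Compliance with Laws", "Confidentiality", "Entire Agreement",
    "General Definition", "Governing Law", "Indemnification",
    "Injunctive Relief", "Jurisdiction and Venue", "Liens", "No Warranties",
    "Other", "Permitted Disclosure", "Survival", "Term",
    "Termination for Convenience",
]

def target_to_label(target: int) -> str:
    """Map a sample's integer target to its provision class name."""
    return PROVISION_CLASSES[target]

# The two samples above carry targets 13 and 0
print(target_to_label(13))  # No Warranties
print(target_to_label(0))   # Assignment
```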
DeveloperOats
null
null
null
false
60
false
DeveloperOats/DBPedia_Classes
2022-08-08T14:54:42.000Z
null
false
4d0aa96069f24063697e4df63b95be78d3f7fb7d
[]
[ "language:en", "license:cc0-1.0", "multilinguality:monolingual", "size_categories:1M<n<10M", "task_categories:text-classification", "task_ids:topic-classification" ]
https://huggingface.co/datasets/DeveloperOats/DBPedia_Classes/resolve/main/README.md
--- annotations_creators: [] language: - en language_creators: [] license: - cc0-1.0 multilinguality: - monolingual pretty_name: 'DBpedia' size_categories: - 1M<n<10M source_datasets: [] tags: [] task_categories: - text-classification task_ids: - topic-classification --- About Dataset DBpedia (from "DB" for "database") is a project aiming to extract structured content from the information created in Wikipedia. This is an extract of the data (after cleaning, kernel included) that provides taxonomic, hierarchical categories ("classes") for 342,782 wikipedia articles. There are 3 levels, with 9, 70 and 219 classes respectively. A version of this dataset is a popular baseline for NLP/text classification tasks. This version of the dataset is much tougher, especially if the L2/L3 levels are used as the targets. This is an excellent benchmark for hierarchical multiclass/multilabel text classification. Some example approaches are included as code snippets. Content DBPedia dataset with multiple levels of hierarchy/classes, as a multiclass dataset. Original DBPedia ontology (triplets data): https://wiki.dbpedia.org/develop/datasets Listing of the class tree/taxonomy: http://mappings.dbpedia.org/server/ontology/classes/ Acknowledgements Thanks to the Wikimedia foundation for creating Wikipedia, DBPedia and associated open-data goodness! Thanks to my colleagues at Sparkbeyond (https://www.sparkbeyond.com) for pointing me towards the taxonomical version of this dataset (as opposed to the classic 14 class version) Inspiration Try different NLP models. See also https://www.kaggle.com/datasets/danofer/dbpedia-classes Compare to the SOTA in Text Classification on DBpedia - https://paperswithcode.com/sota/text-classification-on-dbpedia
DeveloperOats
null
null
null
false
4
false
DeveloperOats/Million_News_Headlines
2022-08-08T14:56:01.000Z
null
false
bc91c8c8dbea6a44069e0a955b6ed8dd54fb7fe3
[]
[ "language:en", "license:cc0-1.0", "multilinguality:monolingual", "size_categories:1M<n<10M" ]
https://huggingface.co/datasets/DeveloperOats/Million_News_Headlines/resolve/main/README.md
--- annotations_creators: [] language: - en language_creators: [] license: - cc0-1.0 multilinguality: - monolingual pretty_name: million news headline size_categories: - 1M<n<10M source_datasets: [] tags: [] task_categories: [] task_ids: [] --- About Dataset Context This dataset contains news headlines published over a period of nineteen years. Sourced from the reputable Australian news source ABC (Australian Broadcasting Corporation) Agency Site: (http://www.abc.net.au) Content Format: CSV ; Single File publish_date: Date of publishing for the article in yyyyMMdd format headline_text: Text of the headline in ASCII, English, lowercase Start Date: 2003-02-19 ; End Date: 2021-12-31 Inspiration I look at this news dataset as a summarised historical record of noteworthy events in the globe from early-2003 to end-2021 with a more granular focus on Australia. This includes the entire corpus of articles published by the abcnews website in the given date range. With a volume of two hundred articles per day and a good focus on international news, we can be fairly certain that every event of significance has been captured here. Digging into the keywords, one can see all the important episodes shaping the last decade and how they evolved over time. Ex: afghanistan war, financial crisis, multiple elections, ecological disasters, terrorism, famous people, criminal activity et cetera. Similar Work Similar news datasets exploring other attributes, countries and topics can be seen on my profile. Most kernels can be reused with minimal changes across these news datasets. Prepared by Rohit Kulkarni Taken from https://www.kaggle.com/datasets/therohk/million-headlines
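Given the two-column CSV layout and the `yyyyMMdd` date format described above, a row can be parsed with the standard library alone (an illustrative sketch; the header names follow the card, and the sample headline is invented):

```python
import csv
import io
from datetime import datetime

def parse_rows(csv_text):
    """Parse (publish_date, headline_text) rows, turning yyyyMMdd into a date."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        yield (datetime.strptime(row["publish_date"], "%Y%m%d").date(),
               row["headline_text"])

sample = ("publish_date,headline_text\n"
          "20030219,aba decides against community broadcasting licence\n")
rows = list(parse_rows(sample))
print(rows[0])
```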
jakartaresearch
null
null
This dataset is built as a playground for beginner to make a use case for creating sentiment analysis model.
false
2
false
jakartaresearch/cerpen-corpus
2022-08-08T14:35:40.000Z
null
false
46112e07762195b01e3c3b53e22cfd69e88e61c3
[]
[ "annotations_creators:no-annotation", "language:id", "language_creators:found", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:n<1K", "size_categories:10K<n<100K", "source_datasets:original", "tags:cerpen", "tags:short-story", "task_categories:text-generation", "task_ids:...
https://huggingface.co/datasets/jakartaresearch/cerpen-corpus/resolve/main/README.md
--- annotations_creators: - no-annotation language: - id language_creators: - found license: - cc-by-4.0 multilinguality: - monolingual pretty_name: Small Indonesian Short Story Corpus size_categories: - n<1K - 10K<n<100K source_datasets: - original tags: - cerpen - short-story task_categories: - text-generation task_ids: - language-modeling --- # Dataset Card for Cerpen Corpus ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This is a small corpus of Indonesian short stories gathered from the internet. We keep the large version for internal research.
If you are interested, please join [our Discord server](https://discord.gg/6v28dq8dRE) ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
hkunlp
null
null
null
false
1
false
hkunlp/ds_codegen
2022-08-08T14:51:05.000Z
null
false
177611ed22e1d0e361fcf4b455a677b9ec9a5921
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/hkunlp/ds_codegen/resolve/main/README.md
--- license: apache-2.0 ---
lslattery
null
null
null
false
2
false
lslattery/wafer-defect-detection
2022-08-14T19:53:45.000Z
null
false
903fe28851d02a976db7a3a4bc12b6cfa2f5443c
[]
[]
https://huggingface.co/datasets/lslattery/wafer-defect-detection/resolve/main/README.md
Dataset used by the paper: Wu, Ming-Ju, Jyh-Shing R. Jang, and Jui-Long Chen. “Wafer Map Failure Pattern Recognition and Similarity Ranking for Large-Scale Data Sets.” IEEE Transactions on Semiconductor Manufacturing 28, no. 1 (February 2015): 1–12.
scikit-learn
null
null
null
false
31
false
scikit-learn/churn-prediction
2022-08-08T17:56:29.000Z
null
false
aa09900373d90780ee70d27571775aff0e51569c
[]
[ "license:cc-by-4.0" ]
https://huggingface.co/datasets/scikit-learn/churn-prediction/resolve/main/README.md
--- license: cc-by-4.0 --- Customer churn prediction dataset of a fictional telecommunication company made by IBM Sample Datasets. Context Predict behavior to retain customers. You can analyze all relevant customer data and develop focused customer retention programs. Content Each row represents a customer, each column contains customer’s attributes described on the column metadata. The data set includes information about: - Customers who left within the last month: the column is called Churn - Services that each customer has signed up for: phone, multiple lines, internet, online security, online backup, device protection, tech support, and streaming TV and movies - Customer account information: how long they’ve been a customer, contract, payment method, paperless billing, monthly charges, and total charges - Demographic info about customers: gender, age range, and if they have partners and dependents Credits for the dataset and the card: - [Kaggle](https://www.kaggle.com/datasets/blastchar/telco-customer-churn) - [Latest version of the dataset by IBM Samples team](https://community.ibm.com/community/user/businessanalytics/blogs/steven-macko/2019/07/11/telco-customer-churn-1113)
asaxena1990
null
null
null
false
2
false
asaxena1990/Dummy_dataset
2022-09-05T01:29:27.000Z
null
false
5cde9ecee39de419b1a7c5838e86248a8a51ceef
[]
[ "license:cc-by-sa-4.0" ]
https://huggingface.co/datasets/asaxena1990/Dummy_dataset/resolve/main/README.md
--- license: cc-by-sa-4.0 --- annotations_creators: - no-annotation language: - en language_creators: - expert-generated license: - cc-by-nc-sa-4.0 multilinguality: - monolingual paperswithcode_id: acronym-identification pretty_name: Massive E-commerce Dataset for Retail and Insurance domain. size_categories: - n<1K source_datasets: - original tags: - chatbots - e-commerce - retail - insurance - consumer - consumer goods task_categories: - question-answering - text-retrieval - text2text-generation - other - translation - conversational task_ids: - extractive-qa - closed-domain-qa - utterance-retrieval - document-retrieval - closed-domain-qa - open-book-qa - closed-book-qa train-eval-index: - col_mapping: labels: tags tokens: tokens config: default splits: eval_split: test task: token-classification task_id: entity_extraction
Chr0my
null
null
null
false
1
false
Chr0my/public_flickr_photos_license_1
2022-08-08T20:39:40.000Z
null
false
ae7ffce08599695beb1a5fe3ba6736ec686abdd6
[]
[ "license:cc-by-nc-sa-3.0" ]
https://huggingface.co/datasets/Chr0my/public_flickr_photos_license_1/resolve/main/README.md
--- license: cc-by-nc-sa-3.0 --- 119,893,266 photos from Flickr (https://www.flickr.com/creativecommons/by-nc-sa-2.0/) --- all photos are under license id = 1, name = Attribution-NonCommercial-ShareAlike License, url = https://creativecommons.org/licenses/by-nc-sa/2.0/
hoskinson-center
null
@InProceedings{huggingface:dataset, title = {A great new dataset}, author={huggingface, Inc. }, year={2020} }
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
false
36
false
hoskinson-center/proof-pile
2022-10-20T17:14:11.000Z
null
false
d2ed9e500d8df9db25a6e5c86139a196d700a22e
[]
[ "annotations_creators:no-annotation", "language:en", "language_creators:found", "multilinguality:monolingual", "tags:math", "tags:mathematics", "tags:formal-mathematics", "task_categories:text-generation", "task_ids:language-modeling" ]
https://huggingface.co/datasets/hoskinson-center/proof-pile/resolve/main/README.md
--- annotations_creators: - no-annotation language: - en language_creators: - found license: [] multilinguality: - monolingual pretty_name: proof-pile size_categories: [] source_datasets: [] tags: - math - mathematics - formal-mathematics task_categories: - text-generation task_ids: - language-modeling --- # Dataset Description The `proof-pile` is a 40GB pre-training dataset of mathematical text that comprises roughly 15 billion tokens. The dataset is composed of diverse sources of both informal and formal mathematics, namely - ArXiv.math (37GB) - Open-source math textbooks (50MB) - Formal mathematics libraries (500MB) - Lean mathlib and other Lean repositories - Isabelle AFP - Coq mathematical components and other Coq repositories - HOL Light - set.mm - Mizar Mathematical Library - Math Overflow and Math Stack Exchange (500MB) - Wiki-style sources (50MB) - ProofWiki - Wikipedia math articles - MATH dataset (6MB) # Supported Tasks This dataset is intended to be used for pre-training language models. We envision models pre-trained on the `proof-pile` will have many downstream applications, including informal quantitative reasoning, formal theorem proving, semantic search for formal mathematics, and autoformalization. # Languages All informal mathematics in the `proof-pile` is written in English and LaTeX (arXiv articles in other languages are filtered out using [languagedetect](https://github.com/shuyo/language-detection/blob/wiki/ProjectHome.md)). Formal theorem proving languages represented in this dataset are Lean 3, Isabelle, Coq, HOL Light, Metamath, and Mizar. # Configurations The data is sorted into `"arxiv", "books", "formal", "stack-exchange", "wiki",` and `"math-dataset"` configurations. This is so that it is easy to upsample particular configurations during pre-training with the `datasets.interleave_datasets()` function. 
# Evaluation The version of `set.mm` in this dataset has 10% of proofs replaced with the `?` character in order to preserve a validation and test set for Metamath provers pre-trained on the `proof-pile`. The precise split can be found here: [validation](https://github.com/zhangir-azerbayev/mm-extract/blob/main/valid_decls.json) and [test](https://github.com/zhangir-azerbayev/mm-extract/blob/main/test_decls.json). The Lean mathlib commit used in this dataset is `6313863`. Theorems created in subsequent commits can be used for evaluating Lean theorem provers. This dataset contains only the training set of the [MATH dataset](https://github.com/hendrycks/math). However, because this dataset contains ProofWiki, the Stacks Project, Trench's Analysis, and Stein's Number Theory, models trained on it cannot be evaluated on the [NaturalProofs dataset](https://github.com/wellecks/naturalproofs). ## Contributions Authors: Zhangir Azerbayev, Edward Ayers, Bartosz Piotrowski. We would like to thank Jeremy Avigad, Albert Jiang, and Wenda Li for their invaluable guidance, and the Hoskinson Center for Formal Mathematics for its support.
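The card above suggests upsampling particular configurations during pre-training with `datasets.interleave_datasets()`. The sampling idea behind that call can be illustrated in pure Python; this is a hedged sketch of probability-weighted interleaving, not the `datasets` library's implementation:

```python
import random

def interleave(pools, probabilities, n, seed=0):
    """Draw n examples, choosing the source pool for each draw according
    to the given probabilities (sampling with replacement)."""
    rng = random.Random(seed)
    names = list(pools)
    out = []
    for _ in range(n):
        name = rng.choices(names, weights=probabilities, k=1)[0]
        out.append(rng.choice(pools[name]))
    return out

# Hypothetical pools: upsample the small formal split relative to arxiv, 3:1
pools = {"arxiv": ["arxiv_doc_a", "arxiv_doc_b"],
         "formal": ["lean_thm", "coq_thm"]}
mix = interleave(pools, probabilities=[0.25, 0.75], n=1000)
print(mix[:3])
```

With `datasets`, the equivalent would be passing the per-configuration datasets and a `probabilities` list to `interleave_datasets`.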
arunreddy
null
null
null
false
2
false
arunreddy/Invictus
2022-08-09T02:26:40.000Z
null
false
f6d54c0f3822be12d65526ea4563372048576c1f
[]
[ "license:cc-by-nc-4.0" ]
https://huggingface.co/datasets/arunreddy/Invictus/resolve/main/README.md
--- license: cc-by-nc-4.0 ---
ahadda5
null
null
null
false
1
false
ahadda5/sanad
2022-08-10T07:08:32.000Z
null
false
f656fde795f465d3a4ffae1f78575f4f98f684c9
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/ahadda5/sanad/resolve/main/README.md
--- license: apache-2.0 ---
deepklarity
null
null
null
false
2
false
deepklarity/top-flutter-packages
2022-08-09T09:05:39.000Z
null
false
c219bc69317de924709cd566a027284ffc79953f
[]
[ "license:cc" ]
https://huggingface.co/datasets/deepklarity/top-flutter-packages/resolve/main/README.md
--- license: cc --- **Top Flutter Packages Dataset** Flutter is an open source framework by Google for building beautiful, natively compiled, multi-platform applications from a single codebase. It is gaining quite a bit of popularity because of the ability to code in a single language and have it running on Android/iOS and the web as well. This dataset contains a snapshot of the top 5000+ Flutter/Dart packages hosted on the [Flutter package repository](https://pub.dev/) The dataset was scraped in `July-2022`. We aim to use this dataset to perform analysis, identify trends, and get a bird's eye view of the rapidly evolving Flutter ecosystem. #### Maintainers: - [Kondrolla Dinesh Reddy](https://twitter.com/KondrollaR) - [Keshaw Soni](https://twitter.com/SoniKeshaw) - [Somya Gautam](http://linkedin.in/in/somya-gautam)
deepklarity
null
null
null
false
2
false
deepklarity/top-npm-packages
2022-08-09T09:13:13.000Z
null
false
eb6de1b8c90f77ec0a8cadc297268308367de753
[]
[ "license:cc" ]
https://huggingface.co/datasets/deepklarity/top-npm-packages/resolve/main/README.md
--- license: cc --- **Top NPM Packages Dataset** This dataset contains a snapshot of the top 3000 popular node packages hosted on the [Node Package Manager](https://www.npmjs.com/) The dataset was scraped in `July-2022`. It includes a combination of data gathered by [Libraries.io](https://libraries.io/) and [npm](https://www.npmjs.com/) We aim to use this dataset to perform analysis, identify trends, and get a bird's eye view of the nodejs ecosystem. #### Maintainers: - [Keshaw Soni](https://twitter.com/SoniKeshaw) - [Somya Gautam](http://linkedin.in/in/somya-gautam) - [Kondrolla Dinesh Reddy](https://twitter.com/KondrollaR)
deepklarity
null
null
null
false
2
false
deepklarity/indian-premier-league
2022-08-09T09:47:29.000Z
null
false
1555264e93350b2cb253e4dd2ca7596b030cc143
[]
[ "license:cc" ]
https://huggingface.co/datasets/deepklarity/indian-premier-league/resolve/main/README.md
--- license: cc --- **Indian Premier League Dataset** ![z](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAOEAAADhCAMAAAAJbSJIAAABtlBMVEUZOYr///8VN4klQY4AIYEALoYAHYAAJYMAH4EALIUaPIwAKoQAI4IAJ4Pg4uwUNokAOYsLMof29/rq7PPd4OsAGn/S1uS0utK/xNjM0eCdpsUAM4wAN4t+irQDMIbp6/IALo7+ygBba6NufKzHzN2qscyPmb25v9X/0ABmdagAEH0AAHikrMmXocIxSpKEj7dGWpqLeWj4rUb9xhIAKJBUZqBDV5jpRnzrUn7iS387PInhOn/aQIBFV30tR5GMlrvAQH88SoFnY3WBc210aXNTVX54b223mU3jtyn/wybpq0S2imBrXnrUrTf6uzD6tD3zpU2gdG/8ok/4mlSOZHexokX9lFjYf2aWmFLgzQbsyAzzjl7ygmJMRIVneW3Exif3emabrkzByiqhgmLYokn0cGpVcXatxj3lY3B4mWWsykCUkFmzUXuLslx5gmbDgWv3W3GXwFd+jWR1V4BZPIatY3rtbnM0UoR4OYX3bnXrXHlyiI7wNnzQWnzoUnyFSYSRNoWqTILV4hlwN4enq0ZSYnfLQoGvQYFhMYK8Nn3AUYvDaZrRkrQ/DHjkw9VpAHCTAGp4ap7QnKtYAAANlklEQVR4nO2c/3/bxBnHrbNlyZFtWbIky3Esu/4S19/TYoc2beMUtkE3KLAytpYvBTYaCNCRkq2FlbZsI20Kg+0/3kmybOnu5MaJv9R53fuH1lak6D567u55nrtHCQQoFAqFQqFQKBQKhUKhUCgUCoVCoVAoFAqFQqFQKBQKhUKhUCgUCoVCoTw/iFLGRATzbsg0AJl8J3/5pZfPnHn5V78W8xlp3g2aLCDTyfzmlVdPnT516oUXrly5cvW3v3stnzk5lhTz+TOvnz59Gqoz9V29evWNN944++ZbK5l5t2wyiJ3Lr5jGO3VqoNAS+Obvr72dPwFmBPnLf7hh67MUXrEVnj0LBV57549LK/Nu4HHJgz9dv3HjlFuhY8Jr195999333l9siWLng5vXoUJoQ3MYWr30hSsDE0KB73240BIjrY8+vnnz+o3Tp1995cxrm/lOp5O5/NKf/3LVMSEU+OEnGws7FkHn1vY2FHj99TObHej/ROugmMnkl95686xjwk8++XRRjSjmP9vZhib8vNXJiN4fQef/tj0KoQn//s0XiylRWvrSFHg7nxdJP85If33HMuHfv/n0q71F7KeZy7u729t3Mr4OD+TfthV++u233y2gESN3d3d3Pmq59AFRkgKiKBkDk2a+NgWaCr/aS8yllccgcgkK/KAz7J9AbHWzRbZayGa7hnMw83Vf4bmtRTNi5tLuzm4rPzwAlhqFtYIUNKR1sdgdWvFvVid9cO7FOTTyOGTu7u585kwwQDQMmA6yoigKqU0ggkhVHVrRMuGDF+/fW6huKrV2d253AJAMw2ClVjXbLgjAlKrqKUEE9XS850hMvP+NZcL7jxdJoZjZ3bnVkaRWsbe62msXu9XlJFPlhEC1qReDa8Ekw5QF52TpC0vhixcWSWHmy51/rFWLxUKQLTT1HGORXi5no9V0udaOM4xcGOT34P2vLIX359niwyAMRhZ0CkKtVtssLpeSzIC2Uq6m4P9ytRaKLS9zwytXvjMVXri/N49mHx5Bz4pRjuOifJhrwc4YZ1wkS8210Br80IhKMqPEgJR1LdEkHtoK//V8d9OYnFPSpVJai8tubUxObxbEMBcRA2qSSddD6ymmtBQAQfe1S98+OA8VPnq+FYY8RrMMly5n1zg+xkp2VAPE5WwWypfDrOG9duW7B+fPX7gwU4Vg7DjYyCrOoJOTSqrXjfGcYHgWRQFbs+y7WmyznmtXvj83W4UgEjXEaIyYD/hjRMPR1norotY/+KwWYy1tohA
Kc+xQJZfuP4SqZ6U08XC2CkGokVMUuRQce70WwKAFiJf/ebf/cNhgWWG0njT4RUK57zjC3uv2ZqoQGHJKWjLqy0xrTCvadLZv5+1PkWrfYmvOuDOy9oFVby8NbJw7f/HChVmFbdG0Xo9rca2eivNHuDxza7tjfxLXB5NOsP+shIb9fR19dlDhxQsz8hYgyARDWjXIqBITPELe3fl4s9/8sDJQqNXs36TankRWkYsSD0yFM/L4UpVROejViobKFMbfOcnfuZO3P4lrLs9RAWa/lAr2txKHXJU4byo8btMPCWwYz2lZuS3weGd6JiB/s99HA0LK4x2zcCyqKfIwtBVenNVUGmKKNSW7xrTacvjZZyPk7/zg7LQMPINFfBME+FVn5sGenKnwYFYK4XzXqnNCff1InfTfjgkHnsGmHgJ8z/kSQi/bME3odhYAw+ew58de/M/i2ozebpfkQmRsgZnblwbm8YxDpqTVHAsyTfQXJ/ZMhU9cCoMom2brwCZ23IFVwyEuIrmUAvykJefHrJBdbhb4I2zQ5j/PD7/wmquTVhqDb8kw+sQT90yFrsaxDIY5YvgkfnyAHIcxfogbtDqKBcqu0QGMCHuUDWjxh0uuEQb9oe0cNL2ZZMo55z4FA70u8fD8vmcYhmW0cXGzZ3uHNhG9G+43IaZjPzx+6UDmlmdDl0+nc/FSKZVmkpVKw7mNgjpDGHk/vbh/0R3R8NjzT5sOBpmeyWib9iDwzgMW+J3HJXjJM0lKRV2ryLllXU+XGM0xSxfvHSv7+/sXN1wHcGNVYvBwZPkQChmmaLlbtoceT47vGxDAXcQLCEXdDOHhL9crzuMnmBBOpfv7Hl8Rq6CtWzYXroz2oRQyWfNxGEX0sILN4eNLRL5HGopeyaUVPVUeDCyCC0o8hDb0hN1CA21dz4qJquhhH8x+InXRo1godXyEipnpaznXXUgmXHkMFXqO4D2saE5PXgcEkeEAwKdMcxkBntxCj1o9fbKIm9i9qwLhPCjQmxs6WdYQy/QwK/ASr8c4ni/iIk2TS+jBBunexwNw+NNdx+KIxD2ocMNzyAnRh1ghMoiiCs2RBYwaNmxzYTP8RGii4fAEEA1MIlNF+4rZSZH0HuuOjB2OqCSFUKOKhQIwe8ecahtzxBMAvzWc6aLeczb2txATErp31FKINrqvkDBuYSqDRUDFaSjE1xghPc/awcr3+1voCg0Q0WtsV8bnvEcdhfgTgWMOu3d1GoWCXImgkGl6HBM0IXoZEJAr+olcSPEedhRi3decN6Macqx7pHWnZ+DjwbLDMb/yaGsLX4JCZ4m+QrTRA4WocU3fhwVGeFo6CQQssrBwraFubRE2DsPI+Um7Y6ONHihEjWspRPvP0dYOny0RCy1MdGcomiYkrED5TCloo/0V6jE8ucBirgnBEiU68ffGFjbNWAqRebAfUqKNHipEZ5WygAe3wrRKdth1BiepWrdbeUzqo3iuq0RHK8Q8wyqL51rHTi18MVpYQguTBdPxJ+5tbW2QLkFnDs0OmlGz+M+lMMrDEsTpKQxIQYJEc9zDPvqEuMKG9rp+WoCaxVEIsBiUAzNVCKM33PNr9UDiMXEQBvCZw1GINNpRiMWx5pIAloJNU2FA5FH3C1sde0QehASFuh3Moo12FApo8m+GL+jJR1j+HQsVi//bTw58S2hQ1/4MhWj4mzOz0EhztgoDKvKcU/zB04JfxsaRFaILNX2FBho5WREoiyo8ylbaWPCetDatHjyWfcN9VGFqlEIxikxkdjwxe4WBmOtRK7wp0HeBDw3PUoK/QgOdeHN2qsWueg8np68wUBt4uRx38NR68F1y3u2nEDFLvBaJqVXEgkm7mnwuCgcOW4kd/Nj/2CL2Ux+FaMfLZZs66mo1wakimINCZwlT4w9+GtyYKPGQCgmsqoPtlzko7K8kVf5z8PPwxnKLkHofVWEqMJyd56HQDh97vzz1uC9ZxCUeUqG3h2o9I+ZKH+ZiQwAbtfb
fnxiEdUziIedSpb6ebeilUklvtLvRKOtJj+aikGVKv/xIiMKxbS/MH/p5C9EQYhzHxZACM0vh7P0hnGr+9/hnhkAJvTmqsDIypiGDKZx21AazwY1HTwgGNEG33HyiNr+4lMjM49LE3r29hIQtZttoSGwTHS+3IDLj3GJjY2/DTCRUdNmPbEQsexqdHxKZdfbk3NZnozruNeKYOT75VsjjmJFC301Ob5bhoxBdp8mNmh5nuooxhLCl2G+s5/5+K1HIWttohbNba/Og+symjKcy+pCriSOdOLZeOiOFvjUxSVWUBtXEPivC6Jr3yKGFrXnPSCFeZuFQXSsWq/3R6Leqjz6eURUy2L7FjBQSCnn6aOtGJNsv+ERXePvdEdswG9VLx3ocEwSNh10YINTs72L67K5hGzCjduaxxxGdzavGeJnFgKqa7c+ofjuk2DbhqDptbLdmaUYK7TxfXl7W9ZR3Pgk3nJJ3M83yYndHrPhg1JYgtlsznR1SDEeh9a/manElpDHVfq/Dq30EYi0GqUhuAPY4ip6TJTbGj19Be3iFGNVWkmk6ZVkitue4SaynGVlegdXT9FSBNcyXWiMxLsx2V/WkNvk6MN/quxR0XvpgssNjO6uHYTVRI0tksP0oudToFSG95Uq/8+iTrwMbVUGpD2uH8enImmPxQiKsoHoI3tNxypOvA4Neiuzxkz3J1WPwNTXzZQaCaUc0Ea9NxBnxgI6MT+SdqnlWkfCwwGoL3sNHFFT6DXg3vSlUuqnEEqIGEm7ghUbWUhTew0cUxY4ILQZMvg4MhInzTA+NGPFisRK5zntE6O0fHg45whsloxFVYoZfjKIn4rX6ik+tPucbpxBq9TEmHQOIIqmOjylgUzaI4Gf5vG/hH9RY/UCW5VxcUbSSbmK+iK0pihKP55Ky+RCP8nLeKMjJ7xppPlvCsN6ZwQ+PuB2IhfhwOMyHQuZb9ANCJrwFO2GBxJhbJu+uHfq9p1E3fGb7Jx2Jk6pp5aUT9MchSf7J2a49EQC0nAASH/eN+OeaKL65qURPkkAWXwnWQov4B8z8ELG8h9Gw1xAXGhVdQko2sEBmoeG8rjAZl+tTK9idC7zbUchMsd4rTSO5nhvAE28XNxm5np1Kcj0vQMQdbxc5ociUy9jfHFhkPOGo+bYMn5rSCzrzwl0wkK6ZR6LdhnGS5hlOY9bUrvWarNyfQKXISRIYiCo5FUgx82XL7jRej5s/QjleE4H5Fxaa01hhfg5gV5nGZoiv6/HavJsyJUTndRLlZMVpLkQuWy43s9UpvRz3PAAMQWCNk5TOUygUCoVCoVAoFAqFQqFQKBQKhUKhUCgUCoVCoVAoFAqFQqFQKBQKhUKhUE4c/wfeJp+nf1nxTgAAAABJRU5ErkJggg==) This dataset contains info on all of the [IPL(Indian Premier League)](https://www.iplt20.com/) cricket matches. Ball-by-Ball level info and scorecard info to be added soon. The dataset was scraped in `July-2022`. #### Mantainers: - [Somya Gautam](http://linkedin.in/in/somya-gautam) - [Kondrolla Dinesh Reddy](https://twitter.com/KondrollaR) - [Keshaw Soni](https://twitter.com/SoniKeshaw)
NitishKarra
null
null
null
false
1
false
NitishKarra/Extractoin
2022-08-09T11:20:53.000Z
null
false
8c1b7854c3bcdca5c2346fa285cbdda798d7d5ff
[]
[]
https://huggingface.co/datasets/NitishKarra/Extractoin/resolve/main/README.md
zchflyer
null
null
null
false
1
false
zchflyer/testData
2022-08-09T14:14:07.000Z
null
false
2343fdc383e5333d6f214e452b2801d6602e54ea
[]
[ "license:mit" ]
https://huggingface.co/datasets/zchflyer/testData/resolve/main/README.md
--- license: mit ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-Blaise-g__scitldr-89735e41-12705693
2022-08-09T16:51:14.000Z
null
false
c310f2a990aa87b7119122cbbf6b4664c8c5b5b7
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:Blaise-g/scitldr" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__scitldr-89735e41-12705693/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - Blaise-g/scitldr eval_info: task: summarization model: Blaise-g/longt5_tglobal_large_scitldr metrics: ['bertscore'] dataset_name: Blaise-g/scitldr dataset_config: Blaise-g--scitldr dataset_split: test col_mapping: text: source target: target --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Blaise-g/longt5_tglobal_large_scitldr * Dataset: Blaise-g/scitldr * Config: Blaise-g--scitldr * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-Blaise-g__scitldr-89735e41-12705694
2022-08-09T16:02:45.000Z
null
false
506292df69a01a71aa75ff0fcdd162eba2120920
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:Blaise-g/scitldr" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__scitldr-89735e41-12705694/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - Blaise-g/scitldr eval_info: task: summarization model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr metrics: ['bertscore'] dataset_name: Blaise-g/scitldr dataset_config: Blaise-g--scitldr dataset_split: test col_mapping: text: source target: target --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr * Dataset: Blaise-g/scitldr * Config: Blaise-g--scitldr * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model.
autoevaluate
null
null
null
false
2
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-1bafd1c4-12715695
2022-08-09T16:13:13.000Z
null
false
f48e06e7e27a3e222fe5923a930ccdf2d3fd9eee
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-1bafd1c4-12715695/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: sshleifer/distilbart-xsum-12-6 metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: test col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-xsum-12-6 * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grappler](https://huggingface.co/grappler) for evaluating this model.
NX2411
null
null
null
false
4
false
NX2411/AIhub-korean-speech-data-large-no-lm
2022-08-09T17:25:15.000Z
null
false
dad96f0a811f89a30ac40d27161ef0ed609f3c49
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/NX2411/AIhub-korean-speech-data-large-no-lm/resolve/main/README.md
--- license: apache-2.0 ---
BlueAquilae
null
null
null
false
1
false
BlueAquilae/agro
2022-08-09T20:43:51.000Z
null
false
0825b7991240aa91ef61186ca7dab49f9df91c49
[]
[ "license:lgpl-3.0" ]
https://huggingface.co/datasets/BlueAquilae/agro/resolve/main/README.md
--- license: lgpl-3.0 ---
benjaminaw93
null
null
null
false
2
false
benjaminaw93/test
2022-08-10T01:12:14.000Z
null
false
73073e5fa4f784efc4a6568dbf2b088cdf27277c
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/benjaminaw93/test/resolve/main/README.md
--- license: apache-2.0 ---
nateraw
null
null
null
false
1
false
nateraw/soundcamp-snares
2022-08-10T03:49:07.000Z
null
false
2c2c43f7ba7358e95de10283386e1e5670076214
[]
[ "license:unknown" ]
https://huggingface.co/datasets/nateraw/soundcamp-snares/resolve/main/README.md
--- license: unknown ---
SLPL
null
null
null
false
1
false
SLPL/syntran-fa
2022-11-03T06:34:17.000Z
null
false
1cc485463d5c2c6c6e3ef239239eb9857e6bebb2
[]
[ "language:fa", "license:mit", "multilinguality:monolingual", "size_categories:30k<n<50k", "task_categories:question-answering", "task_categories:text2text-generation", "task_categories:text-generation", "tags:conditional-text-generation", "tags:conversational-question-answering" ]
https://huggingface.co/datasets/SLPL/syntran-fa/resolve/main/README.md
--- language: - fa license: mit multilinguality: - monolingual size_categories: - 30k<n<50k task_categories: - question-answering - text2text-generation - text-generation task_ids: [] pretty_name: SynTranFa tags: - conditional-text-generation - conversational-question-answering --- # SynTran-fa A syntactically transformed version of Farsi QA datasets that produces fluent responses from questions and short answers. You can load this dataset with the code below: ```python import datasets data = datasets.load_dataset('SLPL/syntran-fa', split="train") ``` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Sharif-SLPL](https://github.com/Sharif-SLPL) - **Repository:** [SynTran-fa](https://github.com/agp-internship/syntran-fa) - **Point of Contact:** [Sadra Sabouri](mailto:sabouri.sadra@gmail.com) ### Dataset Summary Generating fluent responses has always been challenging for the question-answering task, especially in low-resource languages like Farsi. In recent years, there have been several efforts to enlarge QA datasets in Farsi. Syntran-fa is a question-answering dataset that collects the short answers from existing Farsi QA datasets and proposes a complete, fluent answer for each (question, short_answer) pair. The dataset contains nearly 50,000 question-answer entries.
The datasets we used as sources are listed in the [Source Data section](#source-data). The main idea for this dataset comes from [Fluent Response Generation for Conversational Question Answering](https://aclanthology.org/2020.acl-main.19.pdf), where a "parser + syntactic rules" module turns a question and a short answer into several fluent answers. In this project, we used [stanza](https://stanfordnlp.github.io/stanza/) as our parser: we parse each question and generate a response from it and the short answer (verb-less phrases of up to ~4 words). One can continue this project by generating different permutations of the sentence's parts (thus providing more than one sentence per answer) or by training a seq2seq model that does what our rule-based system does (by defining a new text-to-text task). ### Supported Tasks and Leaderboards This dataset can be used for the question-answering task, especially when the goal is to generate fluent responses. You can train a seq2seq model on this dataset to generate fluent responses, as done in [Fluent Response Generation for Conversational Question Answering](https://aclanthology.org/2020.acl-main.19.pdf). 
### Languages + Persian (fa) ## Dataset Structure Each row of the dataset looks like the following: ```json { 'id': 0, 'question': 'باشگاه هاکی ساوتهمپتون چه نام دارد؟', 'short_answer': 'باشگاه هاکی ساوتهمپتون', 'fluent_answer': 'باشگاه هاکی ساوتهمپتون باشگاه هاکی ساوتهمپتون نام دارد.', 'bert_loss': 1.110097069682014 } ``` + `id` : the entry id in the dataset + `question` : the question + `short_answer` : the short answer corresponding to the `question` (the primary answer) + `fluent_answer` : the fluent (long) answer generated from both the `question` and the `short_answer` (the secondary answer) + `bert_loss` : the loss that [pars-bert](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) reports when given the `fluent_answer` as input. The higher this value, the less fluent the sentence is likely to be. Note: the dataset is sorted in increasing order of `bert_loss`, so earlier entries are more likely to be fluent. ### Data Splits Currently, the dataset provides only a `train` split. A `test` split will be added soon. ## Dataset Creation ### Source Data The source datasets that we used are as follows: + [PersianQA](https://github.com/sajjjadayobi/PersianQA) + [PersianQuAD](https://ieeexplore.ieee.org/document/9729745) #### Initial Data Collection and Normalization We extracted all short-answer entries (verb-less phrases of up to ~4 words) from open-source Farsi QA datasets and applied rules based on each question's parse tree to produce long (fluent) answers. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information The dataset is entirely a subset of known open-source datasets, so all of its contents are already publicly available on the internet. Nevertheless, we do not take responsibility for any of that content. 
## Additional Information ### Dataset Curators The dataset was gathered entirely during the Asr Gooyesh Pardaz company's summer internship, under the supervision of Soroush Gooran and Prof. Hossein Sameti and the mentorship of Sadra Sabouri. This project was Farhan Farsi's first internship project. ### Licensing Information MIT ### Citation Information [More Information Needed] ### Contributions Thanks to [@farhaaaaa](https://github.com/farhaaaaa) and [@sadrasabouri](https://github.com/sadrasabouri) for adding this dataset.
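Because the rows are sorted by `bert_loss`, selecting the most fluent subset only requires scanning the head of the dataset. The following is an illustrative sketch with toy rows; the helper name and the threshold value are our own choices, not part of the official release:

```python
# Sketch (not part of the official release): keep only the most fluent
# answers, relying on the rows being sorted in increasing order of
# `bert_loss`. The threshold is an arbitrary illustration.

def most_fluent(rows, max_bert_loss=2.0):
    """Return the leading rows whose `bert_loss` stays below the threshold.

    Because the data is sorted by `bert_loss`, we can stop at the first
    row that reaches the threshold instead of scanning everything.
    """
    selected = []
    for row in rows:
        if row["bert_loss"] >= max_bert_loss:
            break  # every later row has an equal or higher loss
        selected.append(row)
    return selected

# Toy rows mimicking the schema shown above (English stand-ins for brevity).
rows = [
    {"id": 0, "fluent_answer": "answer a", "bert_loss": 1.11},
    {"id": 1, "fluent_answer": "answer b", "bert_loss": 1.90},
    {"id": 2, "fluent_answer": "answer c", "bert_loss": 2.35},
]
print([r["id"] for r in most_fluent(rows)])  # → [0, 1]
```

The same early-exit idea applies when streaming the real dataset row by row.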
UCL-DARK
null
TBC
TODO
false
619
false
UCL-DARK/ludwig
2022-08-11T15:51:56.000Z
null
false
8bea8b3d7a39664cd7827474342599b6ab016991
[]
[ "annotations_creators:expert-generated", "language:en", "language_creators:expert-generated", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "tags:implicature", "tags:pragmatics", "tags:language", "tags:llm", "tags:conversation", "tags...
https://huggingface.co/datasets/UCL-DARK/ludwig/resolve/main/README.md
--- annotations_creators: - expert-generated language: - en language_creators: - expert-generated license: - cc-by-4.0 multilinguality: - monolingual pretty_name: ludwig size_categories: - n<1K source_datasets: - original tags: - implicature - pragmatics - language - llm - conversation - dialogue task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling --- # Dataset Card for LUDWIG ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository: https://github.com/ucl-dark/ludwig** - **Paper: TODO** - **Leaderboard: TODO** - **Point of Contact: Laura Ruis** ### Dataset Summary LUDWIG (**L**anguage **U**nderstanding **W**ith **I**mplied meanin**G**) is a dataset containing English conversational implicatures. Implicature is the act of meaning or implying one thing by saying something else. 
There are different types of implicatures, from simple ones like "Some guests came to the party" (implying not all guests came) to more complicated implicatures that depend on context like "A: Are you going to the party this Friday? B: There's a global pandemic.", implying no. Implicatures serve a wide range of goals in communication: efficiency, style, navigating social interactions, and more. We cannot fully understand utterances without understanding their implications. The implicatures in this dataset are conversational because they come in utterance-response tuples. Each tuple has an implicature associated with it, which is the implied meaning of the response. For example: Utterance: Are you going to the party this Friday? Response: There's a global pandemic. Implicature: No. This dataset can be used to evaluate language models on their pragmatic language understanding. ### Supported Tasks and Leaderboards - ```text-generation```: The dataset can be used to evaluate a model's ability to generate the correct next token, i.e. "yes" or "no", depending on the implicature. For example, if you pass the model an example wrapped in a template like "Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means", the correct completion would be "no". Success in this task can be determined by the ability to generate the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p("no") > p("yes"). - ```fill-mask```: The dataset can be used to evaluate a model's ability to fill the correct token, i.e. "yes" or "no", depending on the implicature. For example, if you pass the model an example wrapped in a template like "Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means [mask]", the correct mask-fill would be "no". 
Success in this task can be determined by the ability to fill the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p("no") > p("yes"). ### Languages English ## Dataset Structure ### Data Instances Find below an example of a 1-shot instance (1-shot because there's 1 prompt example). ``` { "id": 1, "utterance": "Are you going to the party this Friday?", "response": "There's a global pandemic.", "implicature": "No.", "incoherent_implicature": "Yes.", "prompts": [ { "utterance": "Was that hot?", "response": "The sun was scorching.", "implicature": "Yes.", "incoherent_implicature": "No." } ] } ``` ### Data Fields ``` { "id": int, # unique identifier of data points "utterance": str, # the utterance in this example "response": str, # the response in this example "implicature": str, # the implied meaning of the response, e.g. 'yes' "incoherent_implicature": str, # the wrong implied meaning, e.g. 'no' "prompts": [ # optional: prompt examples from the validation set { "utterance": str, "response": str, "implicature": str, "incoherent_implicature": str, } ] } ``` ### Data Splits **Validation**: 118 instances that can be used for finetuning or few-shot learning **Test**: 600 instances that can be used for evaluating models. NB: the splits weren't originally part of the paper that presents this dataset. The same goes for the k-shot prompts. Added by @LauraRuis. ## Dataset Creation ### Curation Rationale Pragmatic language understanding is a crucial aspect of human communication, and implicatures are the primary object of study in this field. We want computational models of language to understand all of a speaker's implications. ### Source Data #### Initial Data Collection and Normalization "Conversational implicatures in English dialogue: Annotated dataset", Elizabeth Jasmi George and Radhika Mamidi 2020. [Link to paper](https://doi.org/10.1016/j.procs.2020.04.251) #### Who are the source language producers? 
These written representations of the utterances were collected manually by scraping and transcribing relevant sources between August 2019 and August 2020. The sources of dialogues in the data include TOEFL listening comprehension short conversations, movie dialogues from IMSDb, and websites explaining idioms, similes, metaphors, and hyperboles. The implicatures are annotated manually. ### Annotations #### Annotation process Manually annotated by dataset collectors. #### Who are the annotators? Authors of the original paper. ### Personal and Sensitive Information All the data is public and not sensitive. ## Considerations for Using the Data ### Social Impact of Dataset Any application that requires communicating with humans requires pragmatic language understanding. ### Discussion of Biases Implicatures can be biased to specific cultures. For example, whether the Pope is Catholic (a commonly used response implicature to indicate "yes") might not be common knowledge for everyone. Implicatures are also language-specific: the way people use pragmatic language depends on the language. This dataset only focuses on the English language. ### Other Known Limitations None yet. ## Additional Information ### Dataset Curators Elizabeth Jasmi George and Radhika Mamidi ### Licensing Information [license](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @article{George:Mamidi:2020, author = {George, Elizabeth Jasmi and Mamidi, Radhika}, doi = {10.1016/j.procs.2020.04.251}, journal = {Procedia Computer Science}, keywords = {}, note = {https://doi.org/10.1016/j.procs.2020.04.251}, number = {}, pages = {2316-2323}, title = {Conversational implicatures in English dialogue: Annotated dataset}, url = {https://app.dimensions.ai/details/publication/pub.1128198497}, volume = {171}, year = {2020} } ``` ### Contributions Thanks to [@LauraRuis](https://github.com/LauraRuis) for adding this dataset.
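The template-based evaluation described under Supported Tasks can be sketched in plain Python. The exact template wording and speaker names below are illustrative choices, not prescribed by the dataset; a scorer would then compare the model's likelihood of the coherent vs. the incoherent implicature as the continuation of this prompt:

```python
# Illustrative sketch: turn a LUDWIG instance (with optional k-shot prompt
# examples) into a single evaluation string. Template wording and names
# ("Esther", "Juan") are our own stand-ins.

def build_prompt(example):
    """Wrap the prompt examples (if any) and the test example into one string."""
    template = (
        "Esther asked \"{utterance}\" and Juan responded \"{response}\", "
        "which means {answer}"
    )
    parts = []
    for shot in example.get("prompts", []):
        # Normalize "Yes." / "No." to a lowercase completion token.
        parts.append(template.format(
            utterance=shot["utterance"],
            response=shot["response"],
            answer=shot["implicature"].rstrip(".").lower() + ".",
        ))
    # Leave the answer slot empty for the example under evaluation.
    parts.append(template.format(
        utterance=example["utterance"],
        response=example["response"],
        answer="",
    ).rstrip())
    return "\n\n".join(parts)

example = {
    "utterance": "Are you going to the party this Friday?",
    "response": "There's a global pandemic.",
    "implicature": "No.",
    "prompts": [{
        "utterance": "Was that hot?",
        "response": "The sun was scorching.",
        "implicature": "Yes.",
    }],
}
print(build_prompt(example))  # ends with "..., which means"
```

A model is then judged on whether it continues the prompt with "no" rather than "yes" (or assigns "no" the higher likelihood).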
yuan1729
null
null
null
false
2
false
yuan1729/YuAN-001
2022-08-10T09:54:37.000Z
null
false
ec949ded2c09f0eb3a75779088bba0b445d1edf8
[]
[ "license:mit" ]
https://huggingface.co/datasets/yuan1729/YuAN-001/resolve/main/README.md
--- license: mit ---
valurank
null
null
null
false
4
false
valurank/News_headlines
2022-08-17T08:19:18.000Z
null
false
fe832a80cc04621645f721f68baa80783bf88486
[]
[ "license:other" ]
https://huggingface.co/datasets/valurank/News_headlines/resolve/main/README.md
--- license: other ---
ChristophSchuhmann
null
null
null
false
2
false
ChristophSchuhmann/test-files
2022-08-10T10:34:27.000Z
null
false
aea0b7d088a9d02b09262cdb55c9d1208efb48b3
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/ChristophSchuhmann/test-files/resolve/main/README.md
--- license: apache-2.0 ---
mariosasko
null
null
null
false
1
false
mariosasko/sql
2022-08-17T17:13:22.000Z
null
false
d8dc8dc5ba9e7f44b4974590c26f62f345a47f56
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/mariosasko/sql/resolve/main/README.md
--- license: apache-2.0 viewer: false --- ### Usage ```python from datasets import load_dataset # Load everything into "train" set ## Using a query dset = load_dataset("mariosasko/sql", sql="SELECT * FROM my_dataset LIMIT 10", con="sqlite:///my_db.db") ## Referencing a table dset = load_dataset("mariosasko/sql", sql="data_table", con="postgres:///db_name") # Load multiple splits dset = load_dataset( "mariosasko/sql", sql={ "train": "SELECT * FROM my_dataset LIMIT 10", "test": "SELECT * FROM my_dataset LIMIT 10 OFFSET 10", }, con="sqlite:///my_db.db" ) ``` `sql` and `con` can only be specified as strings to work with `datasets`' caching mechanism. `pd.read_sql` is used internally for query processing, so refer to its [doc](https://pandas.pydata.org/docs/reference/api/pandas.read_sql.html) for the complete list of supported parameters.
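Since the loader hands these queries to `pd.read_sql` per split, the split boundaries are determined entirely by the SQL. A stdlib `sqlite3` illustration (not the loader itself) of what the two split queries from the example return:

```python
# Stdlib-only sketch of the train/test split queries shown above
# (`LIMIT 10` / `LIMIT 10 OFFSET 10`) against a toy table. This is just
# the underlying SQL behavior, not the datasets loader.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_dataset (id INTEGER, text TEXT)")
con.executemany(
    "INSERT INTO my_dataset VALUES (?, ?)",
    [(i, f"row {i}") for i in range(20)],
)

splits = {
    "train": "SELECT * FROM my_dataset LIMIT 10",
    "test": "SELECT * FROM my_dataset LIMIT 10 OFFSET 10",
}
rows = {name: con.execute(sql).fetchall() for name, sql in splits.items()}
print(len(rows["train"]), len(rows["test"]))  # → 10 10
print(rows["test"][0])  # → (10, 'row 10')
```

Note that without an `ORDER BY`, SQL engines do not guarantee row order, so deterministic splits should order explicitly.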
USC-MOLA-Lab
null
null
null
false
68
false
USC-MOLA-Lab/MFRC
2022-08-26T00:36:03.000Z
null
false
7f5939deef9875465d3ff70ab0102ef957f4f352
[]
[ "arxiv:2208.05545" ]
https://huggingface.co/datasets/USC-MOLA-Lab/MFRC/resolve/main/README.md
# Dataset Card for MFRC ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Reddit posts annotated for moral foundations ### Supported Tasks and Leaderboards ### Languages English ## Dataset Structure ### Data Instances ### Data Fields - text - subreddit - bucket - annotator - annotation - confidence ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information cc-by-4.0 ### Citation Information ```bibtex @misc{trager2022moral, title={The Moral Foundations Reddit Corpus}, author={Jackson Trager and Alireza S. Ziabari and Aida Mostafazadeh Davani and Preni Golazazian and Farzan Karimi-Malekabadi and Ali Omrani and Zhihe Li and Brendan Kennedy and Nils Karl Reimer and Melissa Reyes and Kelsey Cheng and Mellow Wei and Christina Merrifield and Arta Khosravi and Evans Alvarez and Morteza Dehghani}, year={2022}, eprint={2208.05545}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions
CShorten
null
null
null
false
2
false
CShorten/CORD-19-prototype
2022-08-11T20:49:11.000Z
null
false
2328dd311177458120231b103ac782bae1844c0a
[]
[]
https://huggingface.co/datasets/CShorten/CORD-19-prototype/resolve/main/README.md
Subset of CORD-19 for rapid prototyping of ideas in vector encodings and Weaviate.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-multi_news-416d7689-12805701
2022-08-10T19:11:52.000Z
null
false
ffcbba9a8234249d2f89c0e828415cbc81d52428
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:multi_news" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-multi_news-416d7689-12805701/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - multi_news eval_info: task: summarization model: datien228/distilbart-cnn-12-6-ftn-multi_news metrics: [] dataset_name: multi_news dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: datien228/distilbart-cnn-12-6-ftn-multi_news * Dataset: multi_news * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ccdv](https://huggingface.co/ccdv) for evaluating this model.
Meowren
null
null
null
false
1
false
Meowren/CapBot
2022-08-10T18:58:02.000Z
null
false
306b7c093b3d71163e9e7d0a6f4bd572fefca7ae
[]
[]
https://huggingface.co/datasets/Meowren/CapBot/resolve/main/README.md
'Conversational bot'
google
null
@inproceedings{jia2022cvss, title={{CVSS} Corpus and Massively Multilingual Speech-to-Speech Translation}, author={Jia, Ye and Tadmor Ramanovich, Michelle and Wang, Quan and Zen, Heiga}, booktitle={Proceedings of Language Resources and Evaluation Conference (LREC)}, pages={6691--6703}, year={2022} }
CVSS is a massively multilingual-to-English speech-to-speech translation corpus, covering sentence-level parallel speech-to-speech translation pairs from 21 languages into English.
false
6
false
google/cvss
2022-08-27T23:19:14.000Z
null
false
206b001828fb8532e569701519dac19d048fbf09
[]
[ "arxiv:2201.03713", "license:cc-by-4.0" ]
https://huggingface.co/datasets/google/cvss/resolve/main/README.md
---
license: cc-by-4.0
---

# CVSS: A Massively Multilingual Speech-to-Speech Translation Corpus

*CVSS* is a massively multilingual-to-English speech-to-speech translation corpus, covering sentence-level parallel speech-to-speech translation pairs from 21 languages into English. CVSS is derived from the [Common Voice](https://commonvoice.mozilla.org/) speech corpus and the [CoVoST 2](https://github.com/facebookresearch/covost) speech-to-text translation corpus. The translation speech in CVSS is synthesized with two state-of-the-art TTS models trained on the [LibriTTS](http://www.openslr.org/60/) corpus.

CVSS includes two versions of spoken translation for all the 21 x-en language pairs from CoVoST 2, each providing unique value:

- *CVSS-C*: All the translation speech is in a single canonical speaker's voice. Despite being synthetic, this speech is highly natural and clean, and has a consistent speaking style. These properties ease the modeling of the target speech and enable models to produce high-quality translation speech suitable for user-facing applications.
- *CVSS-T*: The translation speech is in voices transferred from the corresponding source speech. Each translation pair has similar voices on its two sides despite being in different languages, making this dataset suitable for building models that preserve speakers' voices when translating speech into different languages.

Together with the source speech originating from Common Voice, these make two multilingual speech-to-speech translation datasets, each with about 1,900 hours of speech.

In addition to translation speech, CVSS also provides normalized translation text matching the pronunciation in the translation speech (e.g. for numbers, currencies, acronyms, etc.), which can be used both for model training and for standardizing evaluation.

Please check out [our paper](https://arxiv.org/abs/2201.03713) for a detailed description of this corpus, as well as the baseline models we trained on both datasets.

# Load the data

The following example loads the translation speech (i.e. target speech) and the normalized translation text (i.e. target text) released in the CVSS corpus. You'll need to load the source speech, and optionally the source text, from [Common Voice v4.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_4_0) separately, and join them by file name.

```py
from datasets import load_dataset

# Load only the ar-en and ja-en language pairs. Omitting the `languages`
# argument would load all the language pairs.
cvss_c = load_dataset('google/cvss', 'cvss_c', languages=['ar', 'ja'])

# Print the structure of the dataset.
print(cvss_c)
```

# License

CVSS is released under the very permissive [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.

## Citation

Please cite this paper when referencing the CVSS corpus:

```
@inproceedings{jia2022cvss,
    title={{CVSS} Corpus and Massively Multilingual Speech-to-Speech Translation},
    author={Jia, Ye and Tadmor Ramanovich, Michelle and Wang, Quan and Zen, Heiga},
    booktitle={Proceedings of Language Resources and Evaluation Conference (LREC)},
    pages={6691--6703},
    year={2022}
}
```
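The join-by-file-name step described for CVSS can be sketched in plain Python. This is only an illustrative sketch: the row layout and field names below (`file`, `sentence`, `text`) are assumptions, not the exact column names of either dataset.

```python
def join_by_file_name(source_rows, target_rows):
    """Pair Common Voice source rows with CVSS target rows sharing a clip name.

    Field names ('file', 'sentence', 'text') are hypothetical placeholders.
    """
    # Index target rows by clip file name for O(1) lookup.
    targets = {row["file"]: row for row in target_rows}
    pairs = []
    for src in source_rows:
        tgt = targets.get(src["file"])
        if tgt is not None:  # keep only clips present on both sides
            pairs.append({
                "file": src["file"],
                "source_text": src["sentence"],
                "target_text": tgt["text"],
            })
    return pairs


source = [{"file": "clip_001.mp3", "sentence": "مرحبا"},
          {"file": "clip_002.mp3", "sentence": "شكرا"}]
target = [{"file": "clip_001.mp3", "text": "Hello."}]

print(join_by_file_name(source, target))
```

An inner join like this drops clips missing on either side, which is usually what you want when building aligned speech-to-speech training pairs.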
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-00961196-12825703
2022-08-11T16:19:41.000Z
null
false
5e3c3e47fc4b7946b1475ac53a45f83fc6430ba7
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-00961196-12825703/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: sysresearch101/t5-large-finetuned-xsum-cnn metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: train col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sysresearch101/t5-large-finetuned-xsum-cnn * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-2bf8ffdd-12835704
2022-08-11T16:34:44.000Z
null
false
5c0489317b6d18b9e69c837e2940f2033b7fd0d7
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-2bf8ffdd-12835704/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: sysresearch101/t5-large-finetuned-xsum metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: train col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sysresearch101/t5-large-finetuned-xsum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-2bf8ffdd-12835705
2022-08-11T16:24:08.000Z
null
false
973997ff4b661d0de5320aef3345d5b4b66ad482
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-2bf8ffdd-12835705/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: t5-large metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: train col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: t5-large * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-2bf8ffdd-12835706
2022-08-11T06:07:30.000Z
null
false
b1b725d70e20a37d1d94aa41d0c22a0fe4c3245a
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-2bf8ffdd-12835706/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: t5-base metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: train col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: t5-base * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-d7ddcd7b-12845708
2022-08-11T03:07:02.000Z
null
false
07a8b5711578956e3962668341e696c23b4afba8
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-d7ddcd7b-12845708/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: sysresearch101/t5-large-finetuned-xsum metrics: [] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sysresearch101/t5-large-finetuned-xsum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-d7ddcd7b-12845709
2022-08-11T03:16:03.000Z
null
false
8b3718ab8d417b60b0841465810b4e9cc062d710
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-d7ddcd7b-12845709/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: facebook/bart-large-cnn metrics: [] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate
null
null
null
false
12
false
autoevaluate/autoeval-staging-eval-project-xsum-d7ddcd7b-12845710
2022-08-11T03:07:06.000Z
null
false
97af091b1c1eeae4c0f48d669716625ccd78c2c6
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-d7ddcd7b-12845710/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: sysresearch101/t5-large-finetuned-xsum-cnn metrics: [] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sysresearch101/t5-large-finetuned-xsum-cnn * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
planhanasan
null
null
null
false
1
false
planhanasan/github-issues
2022-08-11T04:22:30.000Z
null
false
4b84d943bd01791746753c43d65d04d4bd72c098
[]
[ "arxiv:2005.00614" ]
https://huggingface.co/datasets/planhanasan/github-issues/resolve/main/README.md
# Dataset Card for GitHub Issues

## Dataset Description

- **Point of Contact:** [Lewis Tunstall](lewis@huggingface.co)

### Dataset Summary

GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets [repository](https://github.com/huggingface/datasets). It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.

### Supported Tasks and Leaderboards

For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the `task-category-tag` with an appropriate `other:other-task-name`).

- `task-category-tag`: The dataset can be used to train a model for [TASK NAME], which consists of [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name).

### Languages

Provide a brief overview of the languages represented in the dataset. Describe relevant details about specifics of the language such as whether it is social media text, African American English,...

When relevant, please provide [BCP-47 codes](https://tools.ietf.org/html/bcp47), which consist of a [primary language subtag](https://tools.ietf.org/html/bcp47#section-2.2.1), with a [script subtag](https://tools.ietf.org/html/bcp47#section-2.2.3) and/or [region subtag](https://tools.ietf.org/html/bcp47#section-2.2.4) if available.

## Dataset Structure

### Data Instances

Provide a JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.

```
{
  'example_field': ...,
  ...
}
```

Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.

### Data Fields

List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.

- `example_field`: description of `example_field`

Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [tagging app](https://github.com/huggingface/datasets-tagging); you will then only need to refine the generated descriptions.

### Data Splits

Describe and name the splits in the dataset if there are more than one. Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. Provide the sizes of each split.

As appropriate, provide any descriptive statistics for the features, such as average length. For example:

|                         | Train | Valid | Test |
| ----------------------- | ----- | ----- | ---- |
| Input Sentences         |       |       |      |
| Average Sentence Length |       |       |      |

## Dataset Creation

### Curation Rationale

What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?

### Source Data

This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)

#### Initial Data Collection and Normalization

Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process. If data was collected from other pre-existing datasets, link to source here and to their [Hugging Face version](https://huggingface.co/datasets/dataset_name). If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.

#### Who are the source language producers?

State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data. If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here. Describe other people represented or mentioned in the data. Where possible, link to references for the information.

### Annotations

If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.

#### Annotation process

If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide inter-annotator statistics. Describe any annotation validation processes.

#### Who are the annotators?

If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated. Describe the people or systems who originally created the annotations and their selection criteria if applicable. If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.

### Personal and Sensitive Information

State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).

State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history). If efforts were made to anonymize the data, describe the anonymization process.

## Considerations for Using the Data

### Social Impact of Dataset

Please discuss some of the ways you believe the use of this dataset will impact society. The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations. Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.

### Discussion of Biases

Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact. For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic. If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.

### Other Known Limitations

If studies of the dataset have outlined other limitations, such as annotation artifacts, please outline and cite them here.

## Additional Information

### Dataset Curators

List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.

### Licensing Information

Provide the license and link to the license webpage if available.

### Citation Information

Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset. For example:

```
@article{article_id,
  author  = {Author List},
  title   = {Dataset Paper Title},
  journal = {Publication Venue},
  year    = {2525}
}
```

If the dataset has a [DOI](https://www.doi.org/), please provide it here.

### Contributions

Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
autoevaluate
null
null
null
false
6
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-3ca4a8a7-12855711
2022-08-11T19:55:34.000Z
null
false
e7d454b3ca32b66e7d270a2c766c42f5f5f70b46
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-3ca4a8a7-12855711/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: sysresearch101/t5-large-finetuned-xsum metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: train col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sysresearch101/t5-large-finetuned-xsum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate
null
null
null
false
6
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-3ca4a8a7-12855712
2022-08-11T20:04:47.000Z
null
false
6f7358a3b383aea6d10788b8a63cd814e028f64b
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-3ca4a8a7-12855712/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: sysresearch101/t5-large-finetuned-xsum-cnn metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: train col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sysresearch101/t5-large-finetuned-xsum-cnn * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-3ca4a8a7-12855713
2022-08-11T09:41:15.000Z
null
false
e404fa8894ce2092f89eae86da115760db88574f
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-3ca4a8a7-12855713/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: t5-base metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: train col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: t5-base * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-3ca4a8a7-12855714
2022-08-11T19:57:13.000Z
null
false
d6e0e001bba9b14661345a9575ca7f11609a3b59
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-3ca4a8a7-12855714/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: t5-large metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: train col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: t5-large * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
BigBang
null
null
null
false
17
false
BigBang/rosetta_old
2022-08-25T08:36:05.000Z
null
false
c9cf33cf2552490371e7694b1b8ffa8685cc7ba4
[]
[ "license:cc-by-sa-4.0" ]
https://huggingface.co/datasets/BigBang/rosetta_old/resolve/main/README.md
--- license: - cc-by-sa-4.0 ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-c1b20bff-12875715
2022-08-11T13:04:30.000Z
null
false
44c960b81b39ddf04b08a9a23f451c23a30ea8b5
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-c1b20bff-12875715/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: facebook/bart-large-cnn metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: test col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-c1b20bff-12875716
2022-08-11T12:35:14.000Z
null
false
060d4151a9bed0e17f02cf8713bbb080109b6c2b
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-c1b20bff-12875716/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: csebuetnlp/mT5_multilingual_XLSum metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: test col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: csebuetnlp/mT5_multilingual_XLSum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-c1b20bff-12875717
2022-08-11T13:46:48.000Z
null
false
1312ec1d0f1935bb84c3e1471dbcac70b82944fd
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:cnn_dailymail" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-c1b20bff-12875717/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - cnn_dailymail eval_info: task: summarization model: google/pegasus-cnn_dailymail metrics: [] dataset_name: cnn_dailymail dataset_config: 3.0.0 dataset_split: test col_mapping: text: article target: highlights --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
ChristophSchuhmann
null
null
null
false
1,160
false
ChristophSchuhmann/improved_aesthetics_5plus
2022-08-11T12:46:57.000Z
null
false
3995ba969730f1dc7142a26a34d0b192763272a9
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_5plus/resolve/main/README.md
--- license: apache-2.0 ---
hf-internal-testing
null
null
null
false
6
false
hf-internal-testing/spaghetti-video-8-frames
2022-08-25T16:00:38.000Z
null
false
00e13174de84f6892fa7cdbcb030757504ee11d0
[]
[]
https://huggingface.co/datasets/hf-internal-testing/spaghetti-video-8-frames/resolve/main/README.md
---
---

This is the code that was used to generate this video:

```
from decord import VideoReader, cpu
from huggingface_hub import hf_hub_download
import numpy as np

np.random.seed(0)


def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    converted_len = int(clip_len * frame_sample_rate)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices


file_path = hf_hub_download(
    repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
)
vr = VideoReader(file_path, num_threads=1, ctx=cpu(0))

# sample 8 frames
vr.seek(0)
indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=len(vr))
buffer = vr.get_batch(indices).asnumpy()

# create a list of NumPy arrays
video = [buffer[i] for i in range(buffer.shape[0])]
video_numpy = np.array(video)

with open('spaghetti_video_8_frames.npy', 'wb') as f:
    np.save(f, video_numpy)
```
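The frame-sampling helper in the generation script can be exercised on its own with NumPy, without decoding any video; the segment length of 100 below is an arbitrary stand-in for `len(vr)`.

```python
import numpy as np

np.random.seed(0)


def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    # Choose a random window of clip_len * frame_sample_rate frames,
    # then return clip_len evenly spaced indices inside that window.
    converted_len = int(clip_len * frame_sample_rate)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices


indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=100)
print(indices)  # 8 non-decreasing indices, all within [0, 99]
```

Because the last index is clipped to `end_idx - 1`, every returned index stays strictly inside the video's frame range.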
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-5cb1ece5-12895721
2022-08-11T15:29:15.000Z
null
false
5ae360e13ed6372f2c5fe799bb2c4f0799b4ac50
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-5cb1ece5-12895721/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: google/bigbird-pegasus-large-arxiv metrics: [] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-arxiv * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate
null
null
null
false
6
false
autoevaluate/autoeval-staging-eval-project-xsum-4ce7da77-12905722
2022-08-11T15:31:22.000Z
null
false
403c0e9b0f0c46a9cf2579124b06c47d3c08db61
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-4ce7da77-12905722/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: google/bigbird-pegasus-large-arxiv metrics: [] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-arxiv * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate
null
null
null
false
2
false
autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915723
2022-08-11T13:28:11.000Z
null
false
bc903c85ac42397037b91bef89142243c7b4d7b6
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915723/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: facebook/bart-large-cnn metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915724
2022-08-11T13:18:29.000Z
null
false
3f3a3a357a6531c4e6127b8247aaa85fc8d26729
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915724/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: facebook/bart-large-xsum metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-xsum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915725
2022-08-11T13:20:52.000Z
null
false
d06a1f8d090c853b1122c540a6ff6d2b16c10d12
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915725/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: sshleifer/distilbart-cnn-12-6 metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-cnn-12-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915726
2022-08-11T13:10:53.000Z
null
false
70986fc57830f32608141c7f2278093ebd811903
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915726/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: sshleifer/distilbart-xsum-12-6 metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-xsum-12-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
juletxara
null
@article{Liu2022VisualSR, title={Visual Spatial Reasoning}, author={Fangyu Liu and Guy Edward Toh Emerson and Nigel Collier}, journal={ArXiv}, year={2022}, volume={abs/2205.00363} }
The Visual Spatial Reasoning (VSR) corpus is a collection of caption-image pairs with true/false labels. Each caption describes the spatial relation of two individual objects in the image, and a vision-language model (VLM) needs to judge whether the caption is correctly describing the image (True) or not (False).
false
1
false
juletxara/visual-spatial-reasoning
2022-08-11T20:11:21.000Z
null
false
a07bec7a6b1cbf4b5ca3a68bf744e854982b72bd
[]
[ "arxiv:2205.00363", "arxiv:1908.03557", "arxiv:1908.07490", "arxiv:2102.03334", "annotations_creators:crowdsourced", "language:en", "language_creators:machine-generated", "license:apache-2.0", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_catego...
https://huggingface.co/datasets/juletxara/visual-spatial-reasoning/resolve/main/README.md
---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- machine-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: Visual Spatial Reasoning
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- image-classification
task_ids: []
---

# Dataset Card for Visual Spatial Reasoning

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://ltl.mmll.cam.ac.uk/
- **Repository:** https://github.com/cambridgeltl/visual-spatial-reasoning
- **Paper:** https://arxiv.org/abs/2205.00363
- **Leaderboard:** https://paperswithcode.com/sota/visual-reasoning-on-vsr
- **Point of Contact:** https://ltl.mmll.cam.ac.uk/

### Dataset Summary

The Visual Spatial Reasoning (VSR) corpus is a collection of caption-image pairs with true/false labels.
Each caption describes the spatial relation of two individual objects in the image, and a vision-language model (VLM) needs to judge whether the caption is correctly describing the image (True) or not (False).

### Supported Tasks and Leaderboards

We test three baselines, all supported in huggingface. They are VisualBERT [(Li et al. 2019)](https://arxiv.org/abs/1908.03557), LXMERT [(Tan and Bansal, 2019)](https://arxiv.org/abs/1908.07490) and ViLT [(Kim et al. 2021)](https://arxiv.org/abs/2102.03334). The leaderboard can be checked at [Papers With Code](https://paperswithcode.com/sota/visual-reasoning-on-vsr).

model | random split | zero-shot
:-------------|:-------------:|:-------------:
*human* | *95.4* | *95.4*
VisualBERT | 57.4 | 54.0
LXMERT | **72.5** | **63.2**
ViLT | 71.0 | 62.4

### Languages

The language in the dataset is English as spoken by the annotators. The BCP-47 code for English is en. [`meta_data.csv`](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data/data_files/meta_data.jsonl) contains meta data of annotators.

## Dataset Structure

### Data Instances

Each line is an individual data point. Each `jsonl` file is of the following format:

```json
{"image": "000000050403.jpg", "image_link": "http://images.cocodataset.org/train2017/000000050403.jpg", "caption": "The teddy bear is in front of the person.", "label": 1, "relation": "in front of", "annotator_id": 31, "vote_true_validator_id": [2, 6], "vote_false_validator_id": []}
{"image": "000000401552.jpg", "image_link": "http://images.cocodataset.org/train2017/000000401552.jpg", "caption": "The umbrella is far away from the motorcycle.", "label": 0, "relation": "far away from", "annotator_id": 2, "vote_true_validator_id": [], "vote_false_validator_id": [2, 9, 1]}
```

### Data Fields

`image` denotes name of the image in COCO and `image_link` points to the image on the COCO server (so you can also access directly). `caption` is self-explanatory.
`label` being `0` and `1` corresponds to False and True respectively. `relation` records the spatial relation used. `annotator_id` points to the annotator who originally wrote the caption. `vote_true_validator_id` and `vote_false_validator_id` are annotators who voted True or False in the second-phase validation.

### Data Splits

The VSR corpus, after validation, contains 10,119 data points with high agreement. On top of these, we create two splits: (1) a random split and (2) a zero-shot split. For the random split, we randomly divide all data points into train, development, and test sets. The zero-shot split ensures that the train, development and test sets have no overlap of concepts (i.e., if *dog* is in the test set, it is not used for training and development). Below are some basic statistics of the two splits.

split | train | dev | test | total
:------|:--------:|:--------:|:--------:|:--------:
random | 7,083 | 1,012 | 2,024 | 10,119
zero-shot | 5,440 | 259 | 731 | 6,430

Check out [`data/`](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data) for more details.

## Dataset Creation

### Curation Rationale

Understanding spatial relations is fundamental to achieving intelligence. Existing vision-language reasoning datasets are great, but they compose multiple types of challenges and can thus conflate different sources of error. The VSR corpus focuses specifically on spatial relations so we can have accurate diagnosis and maximum interpretability.

### Source Data

#### Initial Data Collection and Normalization

**Image pair sampling.** MS COCO 2017 contains 123,287 images and has labelled the segmentation and classes of 886,284 instances (individual objects). Leveraging the segmentation, we first randomly select two concepts, then retrieve all images containing the two concepts in COCO 2017 (train and validation sets). Then images that contain multiple instances of either concept are filtered out to avoid referencing ambiguity.
For the single-instance images, we also filter out images with instance area size < 30,000, to exclude extremely small instances. After these filtering steps, we randomly sample a pair from the remaining images. We repeat this process to obtain a large number of individual image pairs for caption generation.

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

**Fill in the blank: template-based caption generation.** Given a pair of images, the annotator needs to come up with a valid caption that correctly describes one image but not the other. In this way, the annotator can focus on the key difference between the two images (which should be the spatial relation of the two objects of interest) and come up with a challenging relation that differentiates the two. Similar paradigms are also used in the annotation of previous vision-language reasoning datasets such as NLVR2 (Suhr et al., 2017, 2019) and MaRVL (Liu et al., 2021). To discourage annotators from writing modifiers and differentiating the image pair with things beyond accurate spatial relations, we opt for a template-based classification task instead of free-form caption writing. Besides, the template-generated dataset can be easily categorised based on relations and their meta-categories.

The caption template has the format of “The `OBJ1` (is) __ the `OBJ2`.”, and the annotators are instructed to select a relation from a fixed set to fill in the slot. The copula “is” can be omitted for grammaticality. For example, for “contains”, “consists of”, and “has as a part”, “is” should be discarded in the template when extracting the final caption. The fixed set of spatial relations enables us to retain full control of the generation process. The full list of used relations is given in the table below. It contains 71 spatial relations and is adapted from the summarised relation table of Fagundes et al. (2021).
We made minor changes to filter out clearly unusable relations, made relation names grammatical under our template, and reduced repeated relations. In our final dataset, 65 out of the 71 available relations are actually included (the other 6 are either not selected by annotators or are selected but the captions did not pass the validation phase).

| Category | Spatial Relations |
|-------------|-------------------------------------------------------------------------------------------------------------------------------------------------|
| Adjacency | Adjacent to, alongside, at the side of, at the right side of, at the left side of, attached to, at the back of, ahead of, against, at the edge of |
| Directional | Off, past, toward, down, deep down*, up*, away from, along, around, from*, into, to*, across, across from, through*, down from |
| Orientation | Facing, facing away from, parallel to, perpendicular to |
| Projective | On top of, beneath, beside, behind, left of, right of, under, in front of, below, above, over, in the middle of |
| Proximity | By, close to, near, far from, far away from |
| Topological | Connected to, detached from, has as a part, part of, contains, within, at, on, in, with, surrounding, among, consists of, out of, between, inside, outside, touching |
| Unallocated | Beyond, next to, opposite to, after*, among, enclosed by |

**Second-round human validation.** Every annotated data point is reviewed by at least two additional human annotators (validators). In validation, given a data point (consisting of an image and a caption), the validator gives either a True or False label. We exclude data points that have < 2/3 of validators agreeing with the original label. In the guideline, we communicated to the validators that, for relations such as “left”/“right” and “in front of”/“behind”, they should tolerate different reference frames: i.e., if the caption is true from either the object’s or the viewer’s reference, it should be given a True label.
A False label is assigned only when the caption is incorrect under all reference frames. This adds difficulty for the models, since they cannot naively rely on the relative locations of the objects in the images but also need to correctly identify the orientations of objects to make the best judgement.

#### Who are the annotators?

Annotators are hired from [prolific.co](https://prolific.co). We require that they (1) hold at least a bachelor’s degree, (2) are fluent in English or native speakers, and (3) have a >99% historical approval rate on the platform. All annotators are paid an hourly salary of 12 GBP. Prolific takes an extra 33% service charge and 20% VAT on the service charge.

For caption generation, we release the task in batches of 200 instances, and an annotator is required to finish a batch in 80 minutes. An annotator cannot take more than one batch per day. In this way we have a diverse set of annotators and can also prevent annotators from becoming fatigued. For second-round validation, we group 500 data points in one batch and an annotator is asked to label each batch in 90 minutes. In total, 24 annotators participated in caption generation and 26 participated in validation.

The annotators have diverse demographic backgrounds: they were born in 13 different countries, live in 13 different countries, and have 14 different nationalities. 57.4% of the annotators identify themselves as female and 42.6% as male.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

This project is licensed under the [Apache-2.0 License](https://github.com/cambridgeltl/visual-spatial-reasoning/blob/master/LICENSE).
### Citation Information

```bibtex
@article{Liu2022VisualSR,
  title={Visual Spatial Reasoning},
  author={Fangyu Liu and Guy Edward Toh Emerson and Nigel Collier},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.00363}
}
```

### Contributions

Thanks to [@juletx](https://github.com/juletx) for adding this dataset.
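The `jsonl` format shown under Data Instances in the card above is plain JSON-per-line, so a record can be read with the standard library alone. A minimal sketch (the line below is copied from the card; the variable names are hypothetical):

```python
import json

# One line of the VSR jsonl format, copied verbatim from the dataset card.
line = ('{"image": "000000050403.jpg", '
        '"image_link": "http://images.cocodataset.org/train2017/000000050403.jpg", '
        '"caption": "The teddy bear is in front of the person.", '
        '"label": 1, "relation": "in front of", "annotator_id": 31, '
        '"vote_true_validator_id": [2, 6], "vote_false_validator_id": []}')

record = json.loads(line)
is_true = bool(record["label"])  # 1 -> True (caption matches image), 0 -> False
print(record["relation"], is_true)  # prints: in front of True
```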
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915727
2022-08-11T16:01:35.000Z
null
false
c9ed41cbd1ee3f0275c4c4f0be802dc5864314b1
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915727/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: google/pegasus-large metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-large * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915728
2022-08-11T13:45:26.000Z
null
false
d45ad40b7ef5fb1aabfc89408a6269ff6ecd9fbc
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915728/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: google/pegasus-xsum metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-xsum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
6
false
autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915729
2022-08-11T14:48:21.000Z
null
false
b137984a923a7f937710ac41d0a97f7d68eb0175
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915729/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: google/pegasus-cnn_dailymail metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925730
2022-08-11T14:02:39.000Z
null
false
3947e8559380f35ad1d92cad0266367c924c3888
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925730/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: facebook/bart-large-cnn metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925731
2022-08-11T14:00:18.000Z
null
false
544729e978e5120ece94dc40d9eba44bf865e748
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925731/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: facebook/bart-large-xsum metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-xsum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925732
2022-08-11T14:19:44.000Z
null
false
5e3f25e9deec3aac79ff0edee782423f8dba814d
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925732/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: sshleifer/distilbart-cnn-12-6 metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-cnn-12-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925733
2022-08-11T14:11:20.000Z
null
false
975a6926fa9fd2087ea7a397f74b579d6b22d723
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925733/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: sshleifer/distilbart-xsum-12-6 metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-xsum-12-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
ChristophSchuhmann
null
null
null
false
1
false
ChristophSchuhmann/improved_aesthetics_4.75plus
2022-08-13T18:16:44.000Z
null
false
e13a583aceced0b410e425156fef5f9387827936
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_4.75plus/resolve/main/README.md
--- license: apache-2.0 ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925734
2022-08-11T16:59:36.000Z
null
false
29784d9e5a9d2813d3a8df4b5da15a3a5b5a2f4c
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925734/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: google/pegasus-large metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-large * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925735
2022-08-11T14:37:37.000Z
null
false
4d6f83691af8dd7cea05a532a49d275462449670
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925735/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: google/pegasus-xsum metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-xsum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925736
2022-08-11T15:41:05.000Z
null
false
48948a18fba7481186adc4ee477fe180bced55dc
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925736/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: google/pegasus-cnn_dailymail metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935737
2022-08-11T15:01:40.000Z
null
false
3ebf510b9434206dfaaf35567ba531dcd70a4f99
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935737/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: facebook/bart-large-cnn metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
6
false
autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935738
2022-08-11T15:09:17.000Z
null
false
9dc58c7fae34f20dc3761b45eecfabd787f9f5dd
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935738/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: facebook/bart-large-xsum metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-xsum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935739
2022-08-11T15:23:03.000Z
null
false
288023970a01b31e96633b3ed3c93edd1609f493
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935739/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: sshleifer/distilbart-cnn-12-6 metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-cnn-12-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935740
2022-08-11T15:26:20.000Z
null
false
3705d8c1c5f58d29160f8e72eeb0cc27b3b15ac9
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935740/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: sshleifer/distilbart-xsum-12-6 metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-xsum-12-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935741
2022-08-11T18:08:26.000Z
null
false
79be53a8ffd3f2b6062c431560cd95b332e6de0d
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935741/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: google/pegasus-large metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-large * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935743
2022-08-11T16:55:47.000Z
null
false
80853eab2ea846199ff76c3e6353951583bd6baf
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935743/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: google/pegasus-cnn_dailymail metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-xsum-9818ea4b-12975766
2022-08-11T17:34:59.000Z
null
false
00351121bd85b3ae5629274cabb72e73a17a782d
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:xsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-9818ea4b-12975766/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: sshleifer/distilbart-xsum-12-6 metrics: [] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-xsum-12-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.