| author | cardData | citation | description | disabled | downloads | gated | id | lastModified | paperswithcode_id | private | sha | siblings | tags | readme_url | readme |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Jan1 | null | null | null | false | null | false | Jan1/peppa | 2022-10-09T21:27:09.000Z | null | false | 0b6d8290c2f90626192427dfeff9af7e53800bd4 | [] | [
"license:other"
] | https://huggingface.co/datasets/Jan1/peppa/resolve/main/README.md | ---
license: other
---
|
Chinchis | null | null | null | false | null | false | Chinchis/imagenes | 2022-10-13T05:44:07.000Z | null | false | b59f33a77abdc4b7b5c11d67685e8c8d43ce2307 | [] | [
"license:gpl"
] | https://huggingface.co/datasets/Chinchis/imagenes/resolve/main/README.md | ---
license: gpl
---
|
nerdie01 | null | null | null | false | null | false | nerdie01/emotions-modified | 2022-10-10T00:56:49.000Z | null | false | 1fa9ca6910a87c20259ae78e09ffec3738d5194c | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/nerdie01/emotions-modified/resolve/main/README.md | ---
license: apache-2.0
---
|
liuwei33 | null | null | null | false | null | false | liuwei33/images | 2022-11-08T04:54:33.000Z | null | false | 3d110698d790fc885cc2d4a8dbac8f377f7c571e | [] | [
"license:mit"
] | https://huggingface.co/datasets/liuwei33/images/resolve/main/README.md | ---
license: mit
---
|
Bioskop | null | null | null | false | 8 | false | Bioskop/BeccaCP | 2022-10-10T01:52:28.000Z | null | false | 266a789657f551170b540c38555a03be58b55650 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/Bioskop/BeccaCP/resolve/main/README.md | ---
license: unknown
---
|
Bioskop | null | null | null | false | null | false | Bioskop/BeccaER | 2022-10-10T02:24:30.000Z | null | false | 7a0de57544433aedf02f1e597bf2ac01bc4b8d7b | [] | [
"license:other"
] | https://huggingface.co/datasets/Bioskop/BeccaER/resolve/main/README.md | ---
license: other
---
|
Bioskop | null | null | null | false | null | false | Bioskop/autotrain-data-beccacp | 2022-10-10T02:51:18.000Z | null | false | 3a206d464eacf0492d232e1a2d80ecfebdd6dc0c | [] | [
"task_categories:image-classification"
] | https://huggingface.co/datasets/Bioskop/autotrain-data-beccacp/resolve/main/README.md | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: beccacp
## Dataset Description
This dataset has been automatically processed by AutoTrain for project beccacp.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<1600x838 RGB PIL image>",
"target": 1
},
{
"image": "<1200x628 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=2, names=['Becca', 'Lucy'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 9 |
| valid | 4 |
|
susu727 | null | null | null | false | null | false | susu727/jahe1 | 2022-10-10T02:35:20.000Z | null | false | 97dbedc331f1ea8069ed26e03c0121fe701808f9 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/susu727/jahe1/resolve/main/README.md | ---
license: creativeml-openrail-m
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-squad-plain_text-07b8d6-1707959801 | 2022-10-10T03:43:04.000Z | null | false | 91ee647b51edc6a9c4256d2fe64f83593e49d168 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-squad-plain_text-07b8d6-1707959801/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad
eval_info:
task: extractive_question_answering
model: 21iridescent/distilroberta-base-finetuned-squad2-lwt
metrics: []
dataset_name: squad
dataset_config: plain_text
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/distilroberta-base-finetuned-squad2-lwt
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@crazymageqi@gmail.com](https://huggingface.co/crazymageqi@gmail.com) for evaluating this model. |
lcw99 | null | null | null | false | 766 | false | lcw99/wikipedia-korean-20221001 | 2022-10-10T03:55:17.000Z | null | false | 36f5e4bd11b69ae7aafba8b86e7b55aea3dc4bab | [] | [
"language:ko"
] | https://huggingface.co/datasets/lcw99/wikipedia-korean-20221001/resolve/main/README.md | ---
language:
- ko
--- |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159804 | 2022-10-10T04:44:20.000Z | null | false | f228a309e333d7f992089ab44951e19d794d54e3 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampletx"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159804/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampletx
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b7
metrics: []
dataset_name: phpthinh/exampletx
dataset_config: toxic
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: phpthinh/exampletx
* Config: toxic
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159806 | 2022-10-10T05:09:11.000Z | null | false | 23b183ed5068335a41e7128da800134aa7a042ed | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampletx"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159806/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampletx
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-7b1
metrics: []
dataset_name: phpthinh/exampletx
dataset_config: toxic
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: phpthinh/exampletx
* Config: toxic
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159803 | 2022-10-10T04:40:56.000Z | null | false | 41cd1f2cfb65b63b8a2c571fad704a7f64e385a8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampletx"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159803/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampletx
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b1
metrics: []
dataset_name: phpthinh/exampletx
dataset_config: toxic
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: phpthinh/exampletx
* Config: toxic
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159802 | 2022-10-10T04:39:48.000Z | null | false | 5067892309121cade0cb7ce4231a96ad2e5736b3 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampletx"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159802/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampletx
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-560m
metrics: []
dataset_name: phpthinh/exampletx
dataset_config: toxic
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/exampletx
* Config: toxic
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159805 | 2022-10-10T04:47:39.000Z | null | false | 650a54cb2da8c4ca1093c5b498e6c0999255169c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampletx"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159805/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampletx
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-3b
metrics: []
dataset_name: phpthinh/exampletx
dataset_config: toxic
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: phpthinh/exampletx
* Config: toxic
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559813 | 2022-10-10T05:19:15.000Z | null | false | b9cf3eeb5e208ffddf34723a1e1227c1fdd5a7a8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampletx"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559813/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampletx
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b1
metrics: []
dataset_name: phpthinh/exampletx
dataset_config: constructive
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: phpthinh/exampletx
* Config: constructive
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559812 | 2022-10-10T05:18:19.000Z | null | false | 19f463dd86eec9daad55fa037f232127535ec837 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampletx"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559812/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampletx
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-560m
metrics: []
dataset_name: phpthinh/exampletx
dataset_config: constructive
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/exampletx
* Config: constructive
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559816 | 2022-10-10T05:46:34.000Z | null | false | cfc6cc3d10c7e7875c31082d2c031b19165fa071 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampletx"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559816/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampletx
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-7b1
metrics: []
dataset_name: phpthinh/exampletx
dataset_config: constructive
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: phpthinh/exampletx
* Config: constructive
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559814 | 2022-10-10T05:22:41.000Z | null | false | c5aca6e7b5825b9e2a2b864d33e90cd1436c7665 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampletx"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559814/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampletx
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b7
metrics: []
dataset_name: phpthinh/exampletx
dataset_config: constructive
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: phpthinh/exampletx
* Config: constructive
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559815 | 2022-10-10T05:25:28.000Z | null | false | 39766769c99aa887f9adf4da7b08f7b28539cc6d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampletx"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-7f6ba0-1708559815/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampletx
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-3b
metrics: []
dataset_name: phpthinh/exampletx
dataset_config: constructive
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: phpthinh/exampletx
* Config: constructive
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-b86aaf-1709259817 | 2022-10-10T07:57:16.000Z | null | false | cd3fc7ebe3bf95f1f800f50448b0361f7f43a06a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampletx"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-b86aaf-1709259817/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampletx
eval_info:
task: text_zero_shot_classification
model: gpt2
metrics: ['f1']
dataset_name: phpthinh/exampletx
dataset_config: toxic
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: gpt2
* Dataset: phpthinh/exampletx
* Config: toxic
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
dennlinger | null | @article{aumiller-etal-2022-eur,
author = {Aumiller, Dennis and Chouhan, Ashish and Gertz, Michael},
title = {{EUR-Lex-Sum: A Multi- and Cross-lingual Dataset for Long-form Summarization in the Legal Domain}},
journal = {CoRR},
volume = {abs/2210.13448},
eprinttype = {arXiv},
eprint = {2210.13448},
url = {https://arxiv.org/abs/2210.13448}
} | The EUR-Lex-Sum dataset is a multilingual resource intended for text summarization in the legal domain.
It is based on human-written summaries of legal acts issued by the European Union.
It distinguishes itself by introducing a smaller set of high-quality human-written samples,
each of which has much longer references (and summaries!) than comparable datasets,
Additionally, the underlying legal acts provide a challenging domain-specific application to legal texts,
which are so far underrepresented in non-English languages.
For each legal act, the sample can be available in up to 24 languages
(the officially recognized languages in the European Union);
the validation and test samples consist entirely of samples available in all languages,
and are aligned across all languages at the paragraph level. | false | 158 | false | dennlinger/eur-lex-sum | 2022-11-11T14:25:06.000Z | null | false | dab944b274fe6e047f0cc6b8dc5e0ca68f4dcd36 | [] | [
"arxiv:2210.13448",
"annotations_creators:found",
"annotations_creators:expert-generated",
"language:bg",
"language:hr",
"language:cs",
"language:da",
"language:nl",
"language:en",
"language:et",
"language:fi",
"language:fr",
"language:de",
"language:el",
"language:hu",
"language:ga",
... | https://huggingface.co/datasets/dennlinger/eur-lex-sum/resolve/main/README.md | ---
annotations_creators:
- found
- expert-generated
language:
- bg
- hr
- cs
- da
- nl
- en
- et
- fi
- fr
- de
- el
- hu
- ga
- it
- lv
- lt
- mt
- pl
- pt
- ro
- sk
- sl
- es
- sv
language_creators:
- found
- expert-generated
license:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: eur-lex-sum
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- legal
- eur-lex
- expert summary
- parallel corpus
- multilingual
task_categories:
- translation
- summarization
---
# Dataset Card for the EUR-Lex-Sum Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/achouhan93/eur-lex-sum
- **Paper:** [EUR-Lex-Sum: A Multi-and Cross-lingual Dataset for Long-form Summarization in the Legal Domain](https://arxiv.org/abs/2210.13448)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Dennis Aumiller](mailto:aumiller@informatik.uni-heidelberg.de)
### Dataset Summary
The EUR-Lex-Sum dataset is a multilingual resource intended for text summarization in the legal domain.
It is based on human-written summaries of legal acts issued by the European Union.
It distinguishes itself by introducing a smaller set of high-quality human-written samples, each of which has much longer references (and summaries!) than comparable datasets.
Additionally, the underlying legal acts provide a challenging domain-specific application to legal texts, which are so far underrepresented in non-English languages.
For each legal act, the sample can be available in up to 24 languages (the officially recognized languages in the European Union); the validation and test samples consist entirely of samples available in *all* languages, and are aligned across all languages at the paragraph level.
### Supported Tasks and Leaderboards
- `summarization`: The dataset is primarily suitable for summarization tasks, where it can be used as a small-scale training resource. The primary evaluation metric used in the underlying experiments is [ROUGE](https://huggingface.co/metrics/rouge). The EUR-Lex-Sum data is particularly interesting, because traditional lead-based baselines (such as lead-3) do not work well, given the extremely long reference summaries. However, we can provide reasonably good summaries by applying a modified LexRank approach on the paragraph level.
- `cross-lingual-summarization`: Given that samples of the dataset exist across multiple languages, and both the validation and test set are fully aligned across languages, this dataset can further be used as a cross-lingual benchmark. In these scenarios, language pairs (e.g., EN to ES) can be compared against monolingual systems. Suitable baselines include automatic translations of gold summaries, or translations of simple LexRank-generated monolingual summaries.
- `long-form-summarization`: We further note the particular case for *long-form summarization*. In comparison to news-based summarization datasets, this resource provides around 10x longer *summary texts*. This is particularly challenging for transformer-based models, which struggle with limited context lengths.
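To make the lead-baseline remark above concrete, a lead-k baseline simply copies the first k units of the reference; the sketch below is illustrative (function name and unit granularity are ours) and shows why such a baseline badly undershoots EUR-Lex-Sum's very long gold summaries.

```python
def lead_k(units, k=3):
    """Lead baseline: take the first k units (sentences or paragraphs)
    of the reference as the summary. Effective for news articles, but
    far too short for EUR-Lex-Sum's long reference summaries."""
    return "\n".join(units[:k])

print(lead_k(["First paragraph.", "Second paragraph.", "Third.", "Fourth."]))
```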
### Languages
The dataset supports all [official languages of the European Union](https://european-union.europa.eu/principles-countries-history/languages_en). At the time of collection, those were 24 languages:
Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, and Swedish.
Both the reference texts, as well as the summaries, are translated from an English original text (this was confirmed by private correspondence with the Publications Office of the European Union). Translations and summaries are written by external (professional) parties, contracted by the EU.
Depending on the availability of document summaries in particular languages, we have between 391 (Irish) and 1505 (French) samples available. Over 80% of samples are available in at least 20 languages.
## Dataset Structure
### Data Instances
Data instances contain fairly minimal information. Aside from a unique identifier, corresponding to the Celex ID generated by the EU, two further fields specify the original long-form legal act and its associated summary.
```
{
"celex_id": "3A32021R0847",
"reference": "REGULATION (EU) 2021/847 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL\n [...]",
"summary": "Supporting EU cooperation in the field of taxation: Fiscalis (2021-2027)\n\n [...]"
}
```
### Data Fields
- `celex_id`: The [Celex ID](https://eur-lex.europa.eu/content/tools/eur-lex-celex-infographic-A3.pdf) is a naming convention used for identifying EU-related documents. Among other things, the year of publication and sector codes are embedded in the Celex ID.
- `reference`: This is the full text of a Legal Act published by the EU.
- `summary`: This field contains the summary associated with the respective Legal Act.
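To make the embedded Celex ID components concrete, here is a minimal sketch that splits a (simplified) CELEX number into sector, year, document type, and sequence number. The parser and its field names are our own illustration, not part of the dataset; note also that the sample ID above carries a `3A` prefix, apparently a URL-encoding artifact of the `:` separator, which would need stripping first.

```python
import re

def parse_celex_id(celex_id: str) -> dict:
    """Split a CELEX number into its documented components.

    Simplified pattern: 1-digit sector, 4-digit year,
    1-2 letter document type, 4-digit sequence number.
    """
    match = re.fullmatch(r"(\d)(\d{4})([A-Z]{1,2})(\d{4})", celex_id)
    if match is None:
        raise ValueError(f"Not a recognized CELEX pattern: {celex_id!r}")
    sector, year, doc_type, number = match.groups()
    return {"sector": sector, "year": int(year), "type": doc_type, "number": number}

# "32021R0847": sector 3 (legislation), year 2021, R = regulation
print(parse_celex_id("32021R0847"))
```

For example, the sample instance above (after stripping the `3A` prefix) parses to year 2021 and document type `R` (regulation).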
### Data Splits
We provide pre-split training, validation and test splits.
To obtain the validation and test splits, we randomly assigned all samples that are available across all 24 languages into two equally large portions. In total, 375 instances are available in 24 languages, which means we obtain a validation split of 187 samples and 188 test instances.
All remaining instances are assigned to the language-specific training portions, which differ in their exact size.
We particularly ensured that no duplicates exist across the three splits. For this purpose, we ensured that no exactly matching reference *or* summary exists for any sample. Further information on the length distributions (for the English subset) can be found in the paper.
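The validation/test construction described above can be sketched as follows; the function name and seed are hypothetical, and the actual random assignment used by the authors may differ:

```python
import random

def split_aligned_samples(aligned_ids, seed=42):
    """Randomly halve the fully aligned samples into validation and test.

    Mirrors the card's description: 375 instances available in all
    24 languages yield 187 validation and 188 test samples.
    """
    ids = list(aligned_ids)
    random.Random(seed).shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]

validation, test = split_aligned_samples(range(375))
print(len(validation), len(test))  # 187 188
```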
## Dataset Creation
### Curation Rationale
The dataset was curated to provide a resource for under-explored aspects of automatic text summarization research.
In particular, we want to encourage the exploration of abstractive summarization systems that are not limited by the usual 512-token context window, which works well for (short) news articles but fails to generate long-form summaries, or cannot process longer source texts in the first place.
Also, existing resources primarily focus on a single (and very specialized) domain, namely news article summarization. We wanted to provide a further resource for *legal* summarization, for which many languages do not even have any existing datasets.
We further noticed that no previous system had utilized the human-written samples from the [EUR-Lex platform](https://eur-lex.europa.eu/homepage.html), which provide an excellent source for training instances suitable for summarization research. We later found out about a resource created in parallel based on EUR-Lex documents, which provides a [monolingual (English) corpus](https://github.com/svea-klaus/Legal-Document-Summarization) constructed in similar fashion. However, we provide a more thorough filtering, and extend the process to the remaining 23 EU languages.
### Source Data
#### Initial Data Collection and Normalization
The data was crawled from the aforementioned EUR-Lex platform. In particular, we only use samples for which *HTML* versions of the texts are available, which ensures alignment across languages, given that translations have to retain the original paragraph structure, which is encoded in HTML elements.
We further filter out samples that do not have associated document summaries available.
One particular design choice has to be expanded upon: For some summaries, *several source documents* are considered as an input by the EU. However, since we construct a single-document summarization corpus, we decided to use the **longest reference document only**. This means we explicitly drop the other reference texts from the corpus.
One alternative would have been to concatenate all relevant source texts; however, this generally degrades positional biases in the text, which can be an important learned feature for summarization systems. Our paper details the effect of this decision in terms of n-gram novelty, which we find is affected by this processing choice.
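A minimal sketch of the longest-reference selection described above (the helper name is ours, and length here is simple character count):

```python
def select_reference(source_documents):
    """Keep only the longest source text when several documents
    feed into one summary, as in the single-document corpus design."""
    return max(source_documents, key=len)

docs = ["short act", "a considerably longer amending act with many provisions"]
print(select_reference(docs))
```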
#### Who are the source language producers?
The language producers are external professionals contracted by the European Union offices. As previously noted, all non-English texts are generated from the respective English document (all summaries are direct translations of the English summary, and all reference texts are translated from the English reference text).
No further information on the demographic of annotators is provided.
### Annotations
#### Annotation process
The European Union publishes its [annotation guidelines](https://etendering.ted.europa.eu/cft/cft-documents.html?cftId=6490) for summaries, which target a length of 600 to 800 words.
No information on the guidelines for translations is known.
#### Who are the annotators?
The language producers are external professionals contracted by the European Union offices. No further information on the annotators is available.
### Personal and Sensitive Information
The original text was not modified in any way by the authors of this dataset. Explicit mentions of personal names can occur in the dataset; however, we rely on the European Union to ensure that no further sensitive information is provided in these documents.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset can be used to build summarization systems for previously under-represented languages. For example, samples in Irish and Maltese (among others) enable development and evaluation for these languages.
A successful cross-lingual system would further enable the creation of automated legal summaries for legal acts, possibly enabling foreigners in European countries to automatically translate similar country-specific legal acts.
Given the limited amount of training data, this dataset is also suitable as a test bed for low-resource approaches, especially in comparison to strong unsupervised (extractive) summarization systems.
We also note that the EU explicitly marks the summaries as "not legally binding". The omission of details (a necessary evil of summarization) implies differences from the (legally binding) original legal act.
Risks associated with this dataset largely stem from the potential application of systems trained on it. Decisions in the legal domain require careful analysis of the full context and should not be made based on system-generated summaries at this point in time. Known failure modes of summarization systems, in particular factual hallucinations, should act as further deterrents.
### Discussion of Biases
Due to availability, some languages in the dataset are represented more strongly than others. We attempt to mitigate the influence of this imbalance on evaluation by providing validation and test sets of the same size across all languages.
Because we require the availability of HTML documents, our dataset has a particular temporal bias: it features more documents from 1990 onwards, due both to the increase in EU-related activity and to the native use of the internet for data storage.
This could imply a particular focus on more recent topics (e.g., Brexit, renewable energies, etc.).
Finally, due to the source of these documents being the EU, we expect a natural bias towards EU-centric (and therefore Western-centric) content; other nations and continents will be under-represented in the data.
### Other Known Limitations
As previously outlined, we are aware that some summaries relate to multiple (different) legal acts. For these samples, only one (the longest) text is available in our dataset.
## Additional Information
### Dataset Curators
The web crawler was originally implemented by Ashish Chouhan.
Post-filtering and sample correction was later performed by Dennis Aumiller.
Both were PhD students employed at the Database Systems Research group of Heidelberg University, under the guidance of Prof. Dr. Michael Gertz.
### Licensing Information
Data from the EUR-Lex platform is available under the CC BY-SA 4.0 license. We redistribute the dataset under the same license.
### Citation Information
For the pre-print version, please cite:
```
@article{aumiller-etal-2022-eur,
author = {Aumiller, Dennis and Chouhan, Ashish and Gertz, Michael},
title = {{EUR-Lex-Sum: A Multi- and Cross-lingual Dataset for Long-form Summarization in the Legal Domain}},
journal = {CoRR},
volume = {abs/2210.13448},
eprinttype = {arXiv},
eprint = {2210.13448},
url = {https://arxiv.org/abs/2210.13448}
}
``` |
vogloblinsky | null | null | null | false | 8 | false | vogloblinsky/skateboarding-tricks | 2022-10-10T12:38:17.000Z | null | false | b59e463c9599e735fe6da105cdc0c9509153062e | [] | [
"license:mit",
"annotations_creators:machine-generated",
"language:en",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"task_categories:text-to-image"
] | https://huggingface.co/datasets/vogloblinsky/skateboarding-tricks/resolve/main/README.md | ---
license: mit
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: 'Skateboarding tricks'
size_categories:
- n<1K
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for Skateboarding tricks
Dataset used to train [Text to skateboarding image model](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning).
For each row the dataset contains `image` and `text` keys.
`image` is a variable-size PIL JPEG, and `text` is the accompanying text caption.
|
aari1995 | null | null | null | false | 9 | false | aari1995/false_friends_en_de | 2022-10-10T11:42:11.000Z | null | false | 9a7f50e1fa08109c89fef504eb7095861057d455 | [] | [] | https://huggingface.co/datasets/aari1995/false_friends_en_de/resolve/main/README.md | This dataset contains as many False Friends for the English and German languages as could be found.
False Friends are words that are the same or similar in sound or spelling but have different meanings.
This dataset is created as part of the Stanford NLU course XCS224u final project.
**Example:**
A) False Friend Word: "bald"
B) Meaning of Word in English: "not having hair"
C) Actual, Translated Meaning of German Word: "soon"
D) Translation of English "bald" in German: "glatzköpfig"
**Columns:**
False Friend / False Friend Word: Like A), a word with different meanings in the two languages.
Correct False Friend Synonym: A true German synonym for the False Friend A).
Wrong False Friend Synonym: Like D), a translation of the English False Friend into German.
Sentence: A sentence in which the False Friend Word A) is used.
Correct Sentence: The same sentence as before, but with the False Friend Word A) replaced by the Correct False Friend Synonym.
Wrong Sentence: The same sentence as before, but with the False Friend Word A) replaced by the Wrong False Friend Synonym, as in D).
Correct English Translation: The actual meaning of the False Friend, as in C).
Wrong English Translation: The wrong meaning of the False Friend: a word that sounds like, or is written the same as or similarly to, the False Friend.
Source: The Source (Website) where the False Friend was mentioned. |
Gr3en | null | null | null | false | null | false | Gr3en/MIlo_Rau_Grief_and_Beauty | 2022-10-10T09:02:24.000Z | null | false | cc026d85280aa8a3695332f632b428f1c523e695 | [] | [] | https://huggingface.co/datasets/Gr3en/MIlo_Rau_Grief_and_Beauty/resolve/main/README.md | annotations_creators:
- no-annotation
language:
- en
language_creators:
- other
license:
- artistic-2.0
multilinguality:
- monolingual
pretty_name: Grief and Beauty by Milo Rau
size_categories:
- n<1K
source_datasets:
- original
tags: []
task_categories:
- text-to-image
task_ids: [] |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-666f04-1710259829 | 2022-10-10T09:53:28.000Z | null | false | 238d80ffa879a51e86ae88dd8d545c951d92acbd | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampletx"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampletx-constructive-666f04-1710259829/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampletx
eval_info:
task: text_zero_shot_classification
model: gpt2
metrics: []
dataset_name: phpthinh/exampletx
dataset_config: constructive
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: gpt2
* Dataset: phpthinh/exampletx
* Config: constructive
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
bigscience | null | @misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 of languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot. | false | 14 | false | bigscience/xP3 | 2022-11-04T01:55:44.000Z | null | false | c2dec5fc8aceae0a4b00551af5e903cd919ab074 | [] | [
"arxiv:2211.01786",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"language:es",
"language:eu",
"language:fon",
"language:fr",
"lang... | https://huggingface.co/datasets/bigscience/xP3/resolve/main/README.md | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: xP3
size_categories:
- 100M<n<1B
task_categories:
- other
---
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
<td>xP3 + our evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
<td>Reprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
"targets": "Yes"
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Because languages like `tw` contain only single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.
|Language|Kilobytes|%|Samples|%|
|--------|------:|-:|---:|-:|
|tw|106288|0.11|265071|0.34|
|bm|107056|0.11|265180|0.34|
|ak|108096|0.11|265071|0.34|
|eu|108112|0.11|269973|0.34|
|ca|110608|0.12|271191|0.34|
|fon|113072|0.12|265063|0.34|
|st|114080|0.12|265063|0.34|
|ki|115040|0.12|265180|0.34|
|tum|116032|0.12|265063|0.34|
|wo|122560|0.13|365063|0.46|
|ln|126304|0.13|365060|0.46|
|as|156256|0.16|265063|0.34|
|or|161472|0.17|265063|0.34|
|kn|165456|0.17|265063|0.34|
|ml|175040|0.18|265864|0.34|
|rn|192992|0.2|318189|0.4|
|nso|229712|0.24|915051|1.16|
|tn|235536|0.25|915054|1.16|
|lg|235936|0.25|915021|1.16|
|rw|249360|0.26|915043|1.16|
|ts|250256|0.26|915044|1.16|
|sn|252496|0.27|865056|1.1|
|xh|254672|0.27|915058|1.16|
|zu|263712|0.28|915061|1.16|
|ny|272128|0.29|915063|1.16|
|ig|325232|0.34|950097|1.2|
|yo|352784|0.37|918416|1.16|
|ne|393680|0.41|315754|0.4|
|pa|523248|0.55|339210|0.43|
|gu|560688|0.59|347499|0.44|
|sw|560896|0.59|1114455|1.41|
|mr|666240|0.7|417269|0.53|
|bn|832720|0.88|428843|0.54|
|ta|924496|0.97|410633|0.52|
|te|1332912|1.4|573364|0.73|
|ur|1918272|2.02|855756|1.08|
|vi|3101408|3.27|1667306|2.11|
|code|4330752|4.56|2707724|3.43|
|hi|4393696|4.63|1543441|1.96|
|zh|4589904|4.83|3560556|4.51|
|id|4606288|4.85|2627392|3.33|
|ar|4677264|4.93|2148955|2.72|
|fr|5546688|5.84|5055942|6.41|
|pt|6129584|6.46|3562772|4.52|
|es|7571808|7.98|5151349|6.53|
|en|37261104|39.25|31495184|39.93|
|total|94941936|100.0|78883588|100.0|
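As a sanity check, the percentage columns can be recomputed from the raw counts in the table; this is a small illustrative script, not part of the dataset tooling:

```python
# Totals from the last row of the table above.
TOTAL_KILOBYTES = 94941936
TOTAL_SAMPLES = 78883588

def shares(kilobytes, samples):
    """Return (byte percentage, sample percentage), rounded as in the table."""
    return (round(100 * kilobytes / TOTAL_KILOBYTES, 2),
            round(100 * samples / TOTAL_SAMPLES, 2))

# 'tw' consists mostly of single-sentence Flores translations, so its
# byte share sits far below its sample share; English dominates both.
print(shares(106288, 265071))      # tw -> (0.11, 0.34)
print(shares(37261104, 31495184))  # en -> (39.25, 39.93)
```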
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for HumanEval)
- Natural Language Inference
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-KETI-AIR__korquad-v1.0-acb0d1-1711659840 | 2022-10-10T12:25:13.000Z | null | false | 58ac54322470b66af0c4c947047cd737fe3bf242 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:KETI-AIR/korquad"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-KETI-AIR__korquad-v1.0-acb0d1-1711659840/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- KETI-AIR/korquad
eval_info:
task: extractive_question_answering
model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
metrics: ['angelina-wang/directional_bias_amplification']
dataset_name: KETI-AIR/korquad
dataset_config: v1.0
dataset_split: train
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
* Dataset: KETI-AIR/korquad
* Config: v1.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@HANSOLYOO](https://huggingface.co/HANSOLYOO) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-3783aa-1711959846 | 2022-10-10T13:24:10.000Z | null | false | 89b6ab985e756336632c5d97fb0429dc5ef12756 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-3783aa-1711959846/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: mrp/bert-finetuned-squad
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mrp/bert-finetuned-squad
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. |
olm | null | null | null | false | 338 | false | olm/olm-CC-MAIN-2022-21-sampling-ratio-0.14775510204 | 2022-11-04T17:13:26.000Z | null | false | ece7013ae771554dd462b0e744d20bf601b31fea | [] | [
"annotations_creators:no-annotation",
"language:en",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"tags:pretraining",
"tags:language modelling",
"tags:common crawl",
"tags:web"
] | https://huggingface.co/datasets/olm/olm-CC-MAIN-2022-21-sampling-ratio-0.14775510204/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: OLM May 2022 Common Crawl
size_categories:
- 10M<n<100M
source_datasets: []
tags:
- pretraining
- language modelling
- common crawl
- web
task_categories: []
task_ids: []
---
# Dataset Card for OLM May 2022 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 15% of the May 2022 Common Crawl snapshot.
Note: `last_modified_timestamp` was parsed from whatever a website returned in its `Last-Modified` header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with `last_modified_timestamp`. |
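A minimal sketch of the recommended outlier removal, assuming the timestamps have been converted to Unix epoch seconds; the cutoff dates are illustrative assumptions, not values defined by the dataset:

```python
import datetime

def drop_timestamp_outliers(epoch_seconds, earliest="1995-01-01", latest="2022-06-01"):
    """Keep only timestamps inside a plausible window.

    Servers sometimes return bogus Last-Modified headers, so values far
    outside the web's lifetime (or after the crawl date) are discarded
    before computing statistics.
    """
    lo = datetime.datetime.fromisoformat(earliest).timestamp()
    hi = datetime.datetime.fromisoformat(latest).timestamp()
    return [t for t in epoch_seconds if lo <= t <= hi]

timestamps = [0, 1_600_000_000, 4_102_444_800]  # 1970, 2020, and 2100
print(drop_timestamp_outliers(timestamps))      # only the 2020 value survives
```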
olm | null | null | null | false | 297 | false | olm/olm-CC-MAIN-2022-27-sampling-ratio-0.16142697881 | 2022-11-04T17:13:43.000Z | null | false | 710db3c996b2ed741ba555cbe277a7c27566d0c0 | [] | [
"annotations_creators:no-annotation",
"language:en",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"tags:pretraining",
"tags:language modelling",
"tags:common crawl",
"tags:web"
] | https://huggingface.co/datasets/olm/olm-CC-MAIN-2022-27-sampling-ratio-0.16142697881/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: OLM June/July 2022 Common Crawl
size_categories:
- 10M<n<100M
source_datasets: []
tags:
- pretraining
- language modelling
- common crawl
- web
task_categories: []
task_ids: []
---
# Dataset Card for OLM June/July 2022 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 16% of the June/July 2022 Common Crawl snapshot.
Note: `last_modified_timestamp` was parsed from whatever a website returned in its `Last-Modified` header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with `last_modified_timestamp`. |
futura555 | null | null | null | false | null | false | futura555/test_rendering | 2022-10-10T16:12:33.000Z | null | false | 106cb46160afb4151c8a0818369135b97016428f | [] | [
"license:cc-by-nc-2.0"
] | https://huggingface.co/datasets/futura555/test_rendering/resolve/main/README.md | ---
license: cc-by-nc-2.0
---
|
Arjun1234 | null | null | null | false | null | false | Arjun1234/Arjun | 2022-10-10T16:11:27.000Z | null | false | 111153981b3e2fcf277938d82dce5fd7b80c6d5f | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/Arjun1234/Arjun/resolve/main/README.md | ---
license: apache-2.0
---
|
Appdemon | null | null | null | false | null | false | Appdemon/profile | 2022-10-10T17:46:54.000Z | null | false | 08b3038756476d5e56bfb40da882c17647e88253 | [] | [
"license:other"
] | https://huggingface.co/datasets/Appdemon/profile/resolve/main/README.md | ---
license: other
---
|
olm | null | null | null | false | 4 | false | olm/olm-wikipedia-20220701 | 2022-10-18T19:18:45.000Z | null | false | 062625dc342d3391112ce81e0a1f103f702a5732 | [] | [
"annotations_creators:no-annotation",
"language:en",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"tags:pretraining",
"tags:language modelling",
"tags:wikipedia",
"tags:web"
] | https://huggingface.co/datasets/olm/olm-wikipedia-20220701/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: OLM August 2022 Wikipedia
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- pretraining
- language modelling
- wikipedia
- web
task_categories: []
task_ids: []
---
# Dataset Card for OLM August 2022 Wikipedia
Pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from an August 2022 Wikipedia snapshot. |
olm | null | null | null | false | 61 | false | olm/olm-wikipedia-20221001 | 2022-10-18T19:18:07.000Z | null | false | e4f891065dcf0b7d404f3c14d6cbb610ee33e038 | [] | [
"annotations_creators:no-annotation",
"language:en",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"tags:pretraining",
"tags:language modelling",
"tags:wikipedia",
"tags:web"
] | https://huggingface.co/datasets/olm/olm-wikipedia-20221001/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: OLM October 2022 Wikipedia
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- pretraining
- language modelling
- wikipedia
- web
task_categories: []
task_ids: []
---
# Dataset Card for OLM October 2022 Wikipedia
Pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from an October 2022 Wikipedia snapshot. |
jajejijuasjuas | null | null | null | false | null | false | jajejijuasjuas/alfonso | 2022-10-10T18:18:39.000Z | null | false | 7ef6e591bdd8c2b532a808f9568b42107038aef1 | [] | [
"license:mit"
] | https://huggingface.co/datasets/jajejijuasjuas/alfonso/resolve/main/README.md | ---
license: mit
---
|
julien-c | null | null | null | false | 12 | false | julien-c/titanic-survival | 2022-10-10T19:20:30.000Z | null | false | fc5895c785d2eb73f4071a40385344c74714f9d2 | [] | [
"license:cc",
"tags:tabular-classification",
"task_categories:tabular-classification"
] | https://huggingface.co/datasets/julien-c/titanic-survival/resolve/main/README.md | ---
license: cc
tags:
- tabular-classification
task_categories:
- tabular-classification
---
## Titanic Survival
from https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/problem12.html |
muchojarabe | null | null | null | false | null | false | muchojarabe/images-mxjr | 2022-10-10T21:17:07.000Z | null | false | 27bcbcb611387e7476310e9e9efa471921ad0807 | [] | [
"license:cc"
] | https://huggingface.co/datasets/muchojarabe/images-mxjr/resolve/main/README.md | ---
license: cc
---
|
simioterapia | null | null | null | false | 1 | false | simioterapia/otoniel | 2022-10-10T21:07:42.000Z | null | false | 925491e6eadf4687ec121c6e99138729540c0152 | [] | [] | https://huggingface.co/datasets/simioterapia/otoniel/resolve/main/README.md | |
Mintykev | null | null | null | false | null | false | Mintykev/Test-Style | 2022-10-10T23:33:48.000Z | null | false | e8014e52ee40592a516f3e66ef04393aa9c59e38 | [] | [
"license:cc"
] | https://huggingface.co/datasets/Mintykev/Test-Style/resolve/main/README.md | ---
license: cc
---
|
bob80333 | null | null | null | false | null | false | bob80333/animefacesv2 | 2022-10-13T00:46:25.000Z | null | false | 8c59624177cfa46af7177482c266633bd83aace7 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/bob80333/animefacesv2/resolve/main/README.md | ---
license: unknown
---
|
RTT1 | null | null | null | false | null | false | RTT1/SentiMix | 2022-10-11T05:43:18.000Z | null | false | 205f0391fc1f10320ec3c10708eaa27e88db04c7 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/RTT1/SentiMix/resolve/main/README.md | ---
license: openrail
---
|
arpelarpe | null | null | Nota Lyd- og tekstdata (Nota audio and text data)
The dataset contains both text and speech data from selected parts of Nota's audiobook library. It consists of
over 500 hours of readings with accompanying transcriptions in Danish. All audio data is in .wav format, while all text data
is in .txt format.
The data includes readings of Nota's own magazines "Inspiration" and "Radio/TV", published between 2007 and 2022.
Nota is credited for the work of structuring the data so that text and audio are aligned.
Nota is an institution under the Danish Ministry of Culture that makes printed texts available in digital formats to people
with visual impairments and reading difficulties, e.g. through the production of audiobooks and readings of newspapers, magazines, etc. | false | 1 | false | arpelarpe/nota | 2022-10-11T07:56:49.000Z | null | false | b37f50217a7522a07f588121ecb6c6b06a6a4133 | [] | [
"license:cc0-1.0",
"language:da",
"multilinguality:monolingual",
"task_categories:automatic-speech-recognition"
] | https://huggingface.co/datasets/arpelarpe/nota/resolve/main/README.md | ---
pretty_name: Nota
license:
- cc0-1.0
language:
- da
multilinguality:
- monolingual
task_categories:
- automatic-speech-recognition
---
# Dataset Card Nota Lyd- og tekstdata
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Disclaimer](#disclaimer)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** https://sprogteknologi.dk/dataset/notalyd-ogtekstdata
- **Data Storage Url:** https://sprogtek-ressources.digst.govcloud.dk/nota/
- **Point of Contact:** info@sprogteknologi.dk
### Dataset Summary
This data was created by the public institution Nota (https://nota.dk/), which is part of the Danish Ministry of Culture. Nota maintains a library of audiobooks and audio magazines for people with reading or sight disabilities, and also produces a number of audiobooks and audio magazines itself.
The dataset consists of .wav and .txt files from Nota's audiomagazines "Inspiration" and "Radio/TV".
The dataset has been published as a part of the initiative sprogteknologi.dk, within the Danish Agency for Digital Government (www.digst.dk).
There are 336 GB of available data, containing voice recordings and accompanying transcripts.
Each publication has been segmented into .wav clips of 2-50 seconds, each with an accompanying transcription.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Danish
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called path and its sentence.
```
{'path': '<path_to_clip>.wav',
 'sentence': 'Dette er et eksempel',
 'audio': {'path': '<path_to_clip>.wav',
           'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
           'sampling_rate': 44100}}
```
### Data Fields
path: The path to the audio file
audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
sentence: The sentence that was read by the speaker
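As a hedged sketch of the access pattern described above (the repo id `arpelarpe/nota` follows this card; the loading call is commented out because it downloads the full data, and the duration helper is an illustrative addition, not part of the dataset API):

```python
# Illustrative sketch only. Index the sample first, then the "audio" column,
# so that only one clip is decoded and resampled:
# from datasets import load_dataset
# ds = load_dataset("arpelarpe/nota", split="train")
# clip = ds[0]["audio"]   # preferred over ds["audio"][0]

def clip_duration_seconds(audio):
    """Duration of one decoded clip, from array length and sampling rate."""
    return len(audio["array"]) / audio["sampling_rate"]

# Dummy instance shaped like the example above (44.1 kHz, 88200 samples):
example = {"array": [0.0] * 88200, "sampling_rate": 44100}
print(clip_duration_seconds(example))  # 2.0
```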
### Data Splits
The material has for now only a train split. As this is very early stage of the dataset, splits might be introduced at a later stage.
## Dataset Creation
### Disclaimer
There may be minor discrepancies between the .wav and .txt files. Therefore, there may be issues in the alignment of timestamps, text, and sound files.
There are no strict rules as to how readers read non-letter characters aloud (e.g. numbers, €, $, !, ?). These symbols can be read differently throughout the dataset.
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset is made public and free to use. Recorded individuals have, by written contract, accepted and agreed to the publication of their recordings.
Other names appearing in the dataset belong to already publicly known individuals (e.g. TV or radio hosts). Their names are not to be treated as sensitive or personal data in the context of this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://sprogteknologi.dk/
Contact info@sprogteknologi.dk if you have questions regarding use of data.
They gladly receive inputs and ideas on how to distribute the data.
### Licensing Information
[CC0-1.0](https://creativecommons.org/publicdomain/zero/1.0/)
### |
YaYaB | null | null | null | false | 1 | false | YaYaB/magic-blip-captions | 2022-10-11T08:06:45.000Z | null | false | 1858915e48782e328a4b4f3e0288676707189fe9 | [] | [
"license:cc-by-nc-sa-4.0",
"annotations_creators:machine-generated",
"language:en",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:YaYaB/magic-creature-blip-captions",
"task_categories:text-to-image"
] | https://huggingface.co/datasets/YaYaB/magic-blip-captions/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: 'Subset of Magic card (Creature only) BLIP captions'
size_categories:
- n<1K
source_datasets:
- YaYaB/magic-creature-blip-captions
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Disclaimer
This was inspired from https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions
# Dataset Card for A subset of Magic card BLIP captions
_Dataset used to train [Magic card text to image model](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning)_
BLIP generated captions for Magic Card images collected from the web. Original images were obtained from [Scryfall](https://scryfall.com/) and captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
## Examples

> A woman holding a flower

> two knights fighting

> a card with a unicorn on it
## Citation
If you use this dataset, please cite it as:
```
@misc{yayab2022onepiece,
author = {YaYaB},
title = {Magic card creature split BLIP captions},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/YaYaB/magic-blip-captions/}}
}
``` |
kiddelpool | null | null | null | false | null | false | kiddelpool/HarryPotter | 2022-10-11T08:34:21.000Z | null | false | 446d292da935b1ad1a04d38725494639bf13affc | [] | [
"license:openrail"
] | https://huggingface.co/datasets/kiddelpool/HarryPotter/resolve/main/README.md | ---
license: openrail
---
|
millawell | null | null | null | false | 84 | false | millawell/wikipedia_field_of_science | 2022-10-11T08:26:28.000Z | null | false | 747981da8049e2f3fbebbd1f3bfbb68d1b952733 | [] | [
"license:cc-by-sa-3.0"
] | https://huggingface.co/datasets/millawell/wikipedia_field_of_science/resolve/main/README.md | ---
license: cc-by-sa-3.0
---
|
mwhanna | null | @inproceedings{hanna-etal-2022-act,
title = "ACT-Thor: A Controlled Benchmark for Embodied Action Understanding in Simulated Environments",
author = "Hanna, Michael and
Pedeni, Federico and
Suglia, Alessandro and
Testoni, Alberto and
Bernardi, Raffaella",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, South Korea",
publisher = "International Committee on Computational Linguistics",
} | ACT-Thor is a dataset intended for evaluating models' understanding of actions. | false | 7 | false | mwhanna/ACT-Thor | 2022-10-11T15:29:44.000Z | null | false | 04510d5965da49656ac1a0bd2599d1c272a3f7ef | [] | [] | https://huggingface.co/datasets/mwhanna/ACT-Thor/resolve/main/README.md | # Dataset Card for ACT-Thor
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/hannamw/ACT-Thor
- **Paper:** Paper ACT-Thor: A Controlled Benchmark for Embodied Action Understanding in Simulated Environments (COLING 2022; Link to be added soon)
- **Point of Contact:** Michael Hanna (m.w.hanna@uva.nl)
### Dataset Summary
This dataset is intended to test models' abilities to understand actions, and to do so in a controlled fashion. It is generated automatically using [AI2-Thor](https://ai2thor.allenai.org/), and thus contains images of a virtual house. Models receive an image of an object in a house (the before-image), an action, and four after-images that might have resulted from performing the action on the object. Then, they must predict which of the after-images actually resulted from performing the action in the before-image.
### Supported Tasks
This dataset implements the contrast set task discussed in the paper: given a before image and an action, predict which of 4 after images is the actual result of performing the action in the before image. However, the raw data (not included here) could be used for other tasks, e.g. given a before and after image, infer the action taken. Feel free to reach out and request the full data (with all of the metadata and other information that might be useful), or collect it automatically using the scripts available on the project's [GitHub repo](https://github.com/hannamw/ACT-Thor)!
## Dataset Structure
### Data Instances
There are 4441 instances in the dataset, each consisting of the fields below:
### Data Fields
- id: integer ID of the example
- object: name (string) of the object of interest
- action: name (string) of the action taken
- action_id: integer ID of the action taken
- scene: the ID (string) of the scene from which this example comes
- before_image: The before image
- after_image_{0-3}: The after images, from which the correct image is to be chosen
- label: The index (0-3) of the correct after image
Only the action_id, before_image, and after images need to be fed into the model, which should predict the label.
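As a minimal sketch, once a model has scored the four after-images, the predicted label is simply the argmax over the candidates (the scoring model itself is a placeholder assumption; only the selection step follows this card):

```python
# Placeholder sketch: only the argmax selection over four candidate
# after-images follows the card; how the scores are produced is up to
# the model being evaluated.

def predict_label(scores):
    """Return the index (0-3) of the highest-scoring after-image."""
    return max(range(len(scores)), key=lambda i: scores[i])

# A model that scores after_image_2 highest predicts label 2:
print(predict_label([0.1, 0.3, 0.9, 0.2]))  # 2
```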
### Data Splits
We create three different train-valid-test splits. In the sample split, each example has been randomly assigned to the train, valid, or test split, without any special organization. The object split introduces new objects in the test split, to test object generalization. Finally, the scene split is organized such that the scenes contained in train, valid, and test are disjoint (to test scene generalization).
## Dataset Creation
### Curation Rationale
This dataset was curated for two reasons. Its main purpose is to test models' abilities to understand the consequences of actions. However, its creation also intends to showcase the potential of virtual platforms as sites for the collection of data, especially in a highly controlled fashion.
### Source Data
#### Initial Data Collection and Normalization
All of the data is collected by navigating throughout AI2-Thor virtual environments and recording images in metadata. Check out the paper, where we describe this process in detail!
### Annotations
#### Annotation process
This dataset is generated entirely automatically using AI2-Thor, so there are no annotations. In the paper, we discuss annotations created by humans performing the task; these are only used to check that the task is feasible for humans. We're happy to release them on request; they were collected from students at two universities.
## Considerations for Using the Data
### Discussion of Biases
This paper uses artificially generated images of homes from AI2-Thor. Because of the limited variety of homes, a model performing well on this dataset might not perform well in the context of other homes (e.g. of different designs, from different cultures, etc.)
### Other Known Limitations
This dataset is small, so updating it to include a greater diversity of actions / objects would be very useful. If these actions / objects are added to AI2-Thor, more data can be collected using the script on our [GitHub repo](https://github.com/hannamw/ACT-Thor).
## Additional Information
### Dataset Curators
Michael Hanna (m.w.hanna@uva.nl), Federico Pedeni (federico.pedeni@studenti.unitn.it)
### Licensing Information
Creative Commons 4.0
### Citation Information
Please cite the associated COLING 2022 paper, "Paper ACT-Thor: A Controlled Benchmark for Embodied Action Understanding in Simulated Environments". The full citation will be added here when the paper is published.
### Contributions
Thanks to [@hannamw](https://github.com/hannamw) for adding this dataset. |
Intel | null | null | null | false | null | false | Intel/CoreSearch | 2022-10-21T17:16:15.000Z | null | false | 3bb59b4899fe920613d033770db928961848a035 | [] | [] | https://huggingface.co/datasets/Intel/CoreSearch/resolve/main/README.md | # The CoreSearch Dataset
A large-scale dataset for cross-document event coreference **search**<br/>
- **Paper:** Cross-document Event Coreference Search: Task, Dataset and Modeling (link-TBD)
### Languages
English
## Load Dataset
You can read/download the dataset files following Huggingface Hub instructions:
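For example, a minimal sketch with the `datasets` library (the repo id is taken from this card; the available configs and splits are not documented here, so the loading call is illustrative and commented out):

```python
# Illustrative only: the repo id comes from this card, but config/split
# names are not documented here. Loading requires network access and the
# `datasets` library, so the call is commented out.
REPO_ID = "Intel/CoreSearch"

# from datasets import load_dataset
# ds = load_dataset(REPO_ID)  # downloads the CoreSearch files from the Hub

print(REPO_ID)  # Intel/CoreSearch
```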
## Citation
```
@inproceedings{TBD}
```
## License
We provide the following data sets under a <a href="https://creativecommons.org/licenses/by-sa/3.0/deed.en_US">Creative Commons Attribution-ShareAlike 3.0 Unported License</a>. It is based on content extracted from Wikipedia that is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License.
## Contact
If you have any questions please create a Github issue at <a href="https://github.com/AlonEirew/CoreSearch">https://github.com/AlonEirew/CoreSearch</a>. |
chavinlo | null | null | null | false | null | false | chavinlo/anime-face-video-dataset | 2022-10-11T16:33:03.000Z | null | false | 4b8d510e5b2ca37f76a5f5763434fb215dbc2d62 | [] | [
"license:agpl-3.0"
] | https://huggingface.co/datasets/chavinlo/anime-face-video-dataset/resolve/main/README.md | ---
license: agpl-3.0
---
# Help!!
We have a ton of still (non-moving) videos in the dataset. If you could somehow get rid of them please let me know!!!
# v0.1 Stats:
- Count: 11,300 gifs
- Extracted from: 40 anime videos
- Size: 250-ish MB
# Samples:
Directory View:

Individual:
<img src="https://huggingface.co/datasets/chavinlo/anime-face-video-dataset/resolve/main/garbage1.gif" alt="1" width="128" height="128"/> <img src="https://huggingface.co/datasets/chavinlo/anime-face-video-dataset/resolve/main/gabarge2.gif" alt="2" width="128" height="128"/>
# Info:
A dataset in GIF format for training [chavinlo/anime-video-diffusion](https://huggingface.co/chavinlo/anime-video-diffusion)
The data is 64x64 resolution, 20 total frames per clip.
The original data was in MKV form, which was later cropped using a [modified version of LAFD](https://github.com/chavinlo/light-anime-face-detector) to include only the faces. After that, it was re-encoded (again as MKV) to limit the size and total frame count while maintaining the duration.
# Format:
The dataset is provided in two formats
- ZIP file
- Directory
# Issues:
There were two main issues found during the processing of the dataset:
## Shaky videos
Due to the nature of the face detector, the bounding box had trouble keeping a stable position and often resized abruptly. This could perhaps be fixed by limiting the detector's framerate.
## Still videos
The dataset contains a lot of still videos, which serve no purpose for training since nothing in them moves.
santyysilvaa | null | null | null | false | null | false | santyysilvaa/brisaa | 2022-10-11T16:19:28.000Z | null | false | 8e61b2a664f292d72a8ed5c9c382229eae9edf56 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/santyysilvaa/brisaa/resolve/main/README.md | ---
license: openrail
---
|
Stevvb | null | null | null | false | null | false | Stevvb/Joan | 2022-10-11T16:41:53.000Z | null | false | 3e70ecb78cc36b7aa3aaf92c3b6d2a847d97fc9b | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Stevvb/Joan/resolve/main/README.md | ---
license: openrail
---
|
alkzar90 | null | @ONLINE {rock-glacier-dataset,
author="CMM-Glaciares",
title="Rock Glacier Dataset",
month="October",
year="2022",
url="https://github.com/alcazar90/rock-glacier-detection"
} | TODO: Add a description... | false | 40 | false | alkzar90/rock-glacier-dataset | 2022-11-04T21:35:01.000Z | null | false | 00a0d1c5d2845a4cc6c88e420c056f8370648c82 | [] | [
"annotations_creators:human-curator",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:image-classification",
"task_ids:multi-class-image-classification"
] | https://huggingface.co/datasets/alkzar90/rock-glacier-dataset/resolve/main/README.md | ---
annotations_creators:
- human-curator
language:
- en
license:
- mit
pretty_name: RockGlacier
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
---
# Dataset Card for Rock Glacier Detection
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [RockGlacier Homepage](https://github.com/alcazar90/rock-glacier-detection)
- **Repository:** [alcazar90/rock-glacier-detection](https://github.com/alcazar90/rock-glacier-detection)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Summary
Rock Glacier Detection dataset with satellite images of rock glaciers in the Chilean Andes.
### Supported Tasks and Leaderboards
- `image-classification`: Based on a satellite image (from Sentinel-2), the goal of this task is to predict whether a rock glacier is present in the geographic area.
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=128x128 at 0x7FE652BE2FD0>,
'labels': 0
}
```
### Data Fields
The data instances have the following fields:
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `labels`: an `int` classification label.
Class Label Mappings:
```json
{
"glaciar": 0,
"cordillera": 1
}
```
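As a small sketch, the mapping above can be inverted to recover class names from integer labels (the mapping itself is taken verbatim from this card; the helper name is illustrative):

```python
# Sketch: invert the class-label mapping shown above. The mapping values
# come from this card; the helper is an illustrative addition.
LABEL2ID = {"glaciar": 0, "cordillera": 1}
ID2LABEL = {v: k for k, v in LABEL2ID.items()}

def label_name(i):
    """Return the class name for an integer label."""
    return ID2LABEL[i]

print(label_name(0))  # glaciar
```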
### Data Splits
| |train|validation|test|
|-------------|----:|---------:|---:|
|# of examples|1456 |364 |NA |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@ONLINE {rock-glacier-dataset,
author="CMM - Glaciares (UChile)",
title="Rock Glacier Dataset",
month="October",
year="2022",
url="https://github.com/alcazar90/rock-glacier-detection"
}
```
### Contributions
Thanks to...
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-2953e3-1725560272 | 2022-10-11T18:26:56.000Z | null | false | 936243dcb2a50cb01f6615041e3f84c789a9a6e9 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-2953e3-1725560272/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: 21iridescent/distilbert-base-uncased-finetuned-squad
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/distilbert-base-uncased-finetuned-squad
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@smalllotus](https://huggingface.co/smalllotus) for evaluating this model. |
eliolio | null | null | null | false | 3 | false | eliolio/docvqa | 2022-10-11T21:10:16.000Z | docvqa | false | 55b605eda5bcee283265c3cca78be98e64d38b29 | [] | [
"arxiv:2007.00398",
"language:en",
"task_ids:document-question-answering"
] | https://huggingface.co/datasets/eliolio/docvqa/resolve/main/README.md | ---
language:
- en
paperswithcode_id: docvqa
pretty_name: DocVQA - A Dataset for VQA on Document Images
task_ids:
- document-question-answering
---
# DocVQA: A Dataset for VQA on Document Images
The DocVQA dataset can be downloaded from the [challenge page](https://rrc.cvc.uab.es/?ch=17) in RRC portal ("Downloads" tab).
## Dataset Structure
The DocVQA dataset comprises 50,000 questions framed on 12,767 images. The data is split randomly in an 80-10-10 ratio into train, validation, and test splits.
- Train split: 39,463 questions and 10,194 images
- Validation split: 5,349 questions and 1,286 images
- Test split: 5,188 questions and 1,287 images
## Resources and Additional Information
- More information can be found on the [challenge page](https://rrc.cvc.uab.es/?ch=17) and in the [DocVQA paper](https://arxiv.org/abs/2007.00398).
- Document images are taken from the [UCSF Industry Documents Library](https://www.industrydocuments.ucsf.edu/). The collection consists of a mix of printed, typewritten, and handwritten content. A wide variety of document types appears in this dataset, including letters, memos, notes, reports, etc.
## Citation Information
```
@InProceedings{mathew2021docvqa,
author = {Mathew, Minesh and Karatzas, Dimosthenis and Jawahar, CV},
title = {Docvqa: A dataset for vqa on document images},
booktitle = {Proceedings of the IEEE/CVF winter conference on applications of computer vision},
year = {2021},
pages = {2200--2209},
}
``` |
TuxedoDamager | null | null | null | false | null | false | TuxedoDamager/Nard_Style | 2022-10-11T18:36:00.000Z | null | false | 173349a9ded8d6f13cecc475a086ba8737e4c753 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/TuxedoDamager/Nard_Style/resolve/main/README.md | ---
license: afl-3.0
---
|
tadeyina | null | null | null | false | 2 | false | tadeyina/celeb-identities | 2022-10-15T22:46:29.000Z | null | false | efad5f97720b671c355049077b96026d6a313a3d | [] | [] | https://huggingface.co/datasets/tadeyina/celeb-identities/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: Brad_Pitt
1: Donald_Trump
2: Johnny_Depp
3: Kanye
4: Obama
splits:
- name: train
num_bytes: 370023.0
num_examples: 15
download_size: 368139
dataset_size: 370023.0
---
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
awacke1 | null | null | null | false | null | false | awacke1/WikipediaSearchMemory | 2022-10-12T01:15:38.000Z | null | false | 22b1b3e305944c13c5f88488fecfc219682c7984 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/awacke1/WikipediaSearchMemory/resolve/main/README.md | ---
license: apache-2.0
---
|
awacke1 | null | null | null | false | null | false | awacke1/WikipediaSearch | 2022-11-16T19:37:56.000Z | null | false | efd5e65b1511adc47fadbfc3c187e54d7a4a22ff | [] | [] | https://huggingface.co/datasets/awacke1/WikipediaSearch/resolve/main/README.md | |
Austenooo | null | null | null | false | null | false | Austenooo/Snow_White_Images | 2022-10-12T02:54:19.000Z | null | false | 721385cbad5f6417bd1a934744839d8d9e2d7ac3 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Austenooo/Snow_White_Images/resolve/main/README.md | ---
license: openrail
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampleem-raw-eb2c05-1728660343 | 2022-10-12T04:15:09.000Z | null | false | f3e92292484493e2928caa57ab762a460b4c7d64 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampleem"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampleem-raw-eb2c05-1728660343/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampleem
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-3b
metrics: []
dataset_name: phpthinh/exampleem
dataset_config: raw
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: phpthinh/exampleem
* Config: raw
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampleem-filter-918293-1728760345 | 2022-10-12T03:54:26.000Z | null | false | 7bd2422ccdf8548c7f437bde9c3f65b056ff9d4b | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampleem"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampleem-filter-918293-1728760345/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampleem
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-560m
metrics: []
dataset_name: phpthinh/exampleem
dataset_config: filter
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/exampleem
* Config: filter
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampleem-filter-918293-1728760348 | 2022-10-12T04:14:40.000Z | null | false | b764b516764e74d9ff0975ea467da7a0760b2523 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampleem"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampleem-filter-918293-1728760348/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampleem
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-3b
metrics: []
dataset_name: phpthinh/exampleem
dataset_config: filter
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: phpthinh/exampleem
* Config: filter
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampleem-filter-918293-1728760347 | 2022-10-12T04:05:55.000Z | null | false | 81ede9a00a68734f13bb0ab5808af8d016e9024f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampleem"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampleem-filter-918293-1728760347/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampleem
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b7
metrics: []
dataset_name: phpthinh/exampleem
dataset_config: filter
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: phpthinh/exampleem
* Config: filter
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampleem-raw-eb2c05-1728660340 | 2022-10-12T03:54:50.000Z | null | false | 94545de524aef3ee09ddedd7b89e4a643867bd86 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampleem"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampleem-raw-eb2c05-1728660340/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampleem
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-560m
metrics: []
dataset_name: phpthinh/exampleem
dataset_config: raw
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/exampleem
* Config: raw
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampleem-raw-eb2c05-1728660342 | 2022-10-12T04:05:21.000Z | null | false | cbcddea6640feae5f27c244e19046376033efba2 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampleem"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampleem-raw-eb2c05-1728660342/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampleem
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b7
metrics: []
dataset_name: phpthinh/exampleem
dataset_config: raw
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: phpthinh/exampleem
* Config: raw
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampleem-filter-918293-1728760346 | 2022-10-12T03:58:57.000Z | null | false | d4c807cc634c6341b7deac467f3c4b6845a88815 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampleem"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampleem-filter-918293-1728760346/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampleem
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b1
metrics: []
dataset_name: phpthinh/exampleem
dataset_config: filter
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: phpthinh/exampleem
* Config: filter
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampleem-raw-eb2c05-1728660341 | 2022-10-12T03:57:52.000Z | null | false | 73622dcf570230819042ae3958cf718313679fe2 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampleem"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampleem-raw-eb2c05-1728660341/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampleem
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b1
metrics: []
dataset_name: phpthinh/exampleem
dataset_config: raw
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: phpthinh/exampleem
* Config: raw
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampleem-raw-eb2c05-1728660344 | 2022-10-12T05:09:11.000Z | null | false | 1707ab105a94de8ff916d1bd0b27fecc0794c26c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampleem"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampleem-raw-eb2c05-1728660344/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampleem
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-7b1
metrics: []
dataset_name: phpthinh/exampleem
dataset_config: raw
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: phpthinh/exampleem
* Config: raw
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__exampleem-filter-918293-1728760349 | 2022-10-12T05:07:59.000Z | null | false | 45701e2cc8bb4be3cfb76e0bdf0ebc4a5f170a8f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/exampleem"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__exampleem-filter-918293-1728760349/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampleem
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-7b1
metrics: []
dataset_name: phpthinh/exampleem
dataset_config: filter
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: phpthinh/exampleem
* Config: filter
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
MickyMike | null | null | null | false | 143 | false | MickyMike/cvefixes_bigvul | 2022-10-12T10:31:00.000Z | null | false | 922b82a1fa268fd4c0c1bfccf2b19a65cb2d0ab0 | [] | [
"license:mit"
] | https://huggingface.co/datasets/MickyMike/cvefixes_bigvul/resolve/main/README.md | ---
license: mit
---
|
ejcho623 | null | null | null | false | 8 | false | ejcho623/undraw-raw | 2022-10-12T19:03:19.000Z | null | false | 975066cb621855cb516283f8326c4eecf02c2532 | [] | [] | https://huggingface.co/datasets/ejcho623/undraw-raw/resolve/main/README.md | Woot! |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__examplehsd-raw-ff3db7-1730160385 | 2022-10-12T08:14:48.000Z | null | false | 35bed6b7936cc3dfbebba2eb1acddbbbbc179072 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/examplehsd"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__examplehsd-raw-ff3db7-1730160385/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/examplehsd
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-560m
metrics: ['f1']
dataset_name: phpthinh/examplehsd
dataset_config: raw
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/examplehsd
* Config: raw
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__examplehsd-raw-ff3db7-1730160386 | 2022-10-12T08:26:31.000Z | null | false | 209080c016b0fe9ec69fef87df59e03d29946314 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/examplehsd"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__examplehsd-raw-ff3db7-1730160386/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/examplehsd
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b1
metrics: ['f1']
dataset_name: phpthinh/examplehsd
dataset_config: raw
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: phpthinh/examplehsd
* Config: raw
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__examplehsd-raw-ff3db7-1730160389 | 2022-10-12T13:23:31.000Z | null | false | 8befac237fc835dbda6710f519490434d2a4597b | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/examplehsd"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__examplehsd-raw-ff3db7-1730160389/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/examplehsd
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-7b1
metrics: ['f1']
dataset_name: phpthinh/examplehsd
dataset_config: raw
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: phpthinh/examplehsd
* Config: raw
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__examplehsd-raw-ff3db7-1730160388 | 2022-10-12T09:34:26.000Z | null | false | c577e2da490be30c419c4de02174c7531847265c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/examplehsd"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__examplehsd-raw-ff3db7-1730160388/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/examplehsd
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-3b
metrics: ['f1']
dataset_name: phpthinh/examplehsd
dataset_config: raw
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: phpthinh/examplehsd
* Config: raw
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__examplehsd-raw-ff3db7-1730160387 | 2022-10-12T08:59:02.000Z | null | false | 1577ea3dcf1af03119dd19acce4ce13ce03f67f7 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/examplehsd"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__examplehsd-raw-ff3db7-1730160387/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/examplehsd
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b7
metrics: ['f1']
dataset_name: phpthinh/examplehsd
dataset_config: raw
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: phpthinh/examplehsd
* Config: raw
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
mozilla-foundation | null | @inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
} | null | false | 5,825 | false | mozilla-foundation/common_voice_11_0 | 2022-10-25T15:34:31.000Z | common-voice | false | d91946acf316508b85ed0c87611bbbdf21bd1285 | [] | [
"arxiv:1912.06670",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"license:cc0-1.0",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"size_categories:1M<n<10M",
"source_dataset... | https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- 10K<n<100K
ar:
- 100K<n<1M
as:
- 1K<n<10K
ast:
- n<1K
az:
- n<1K
ba:
- 100K<n<1M
bas:
- 1K<n<10K
be:
- 100K<n<1M
bg:
- 1K<n<10K
bn:
- 100K<n<1M
br:
- 10K<n<100K
ca:
- 1M<n<10M
ckb:
- 100K<n<1M
cnh:
- 1K<n<10K
cs:
- 10K<n<100K
cv:
- 10K<n<100K
cy:
- 100K<n<1M
da:
- 1K<n<10K
de:
- 100K<n<1M
dv:
- 10K<n<100K
el:
- 10K<n<100K
en:
- 1M<n<10M
eo:
- 1M<n<10M
es:
- 1M<n<10M
et:
- 10K<n<100K
eu:
- 100K<n<1M
fa:
- 100K<n<1M
fi:
- 10K<n<100K
fr:
- 100K<n<1M
fy-NL:
- 10K<n<100K
ga-IE:
- 1K<n<10K
gl:
- 10K<n<100K
gn:
- 1K<n<10K
ha:
- 1K<n<10K
hi:
- 10K<n<100K
hsb:
- 1K<n<10K
hu:
- 10K<n<100K
hy-AM:
- 1K<n<10K
ia:
- 10K<n<100K
id:
- 10K<n<100K
ig:
- 1K<n<10K
it:
- 100K<n<1M
ja:
- 10K<n<100K
ka:
- 10K<n<100K
kab:
- 100K<n<1M
kk:
- 1K<n<10K
kmr:
- 10K<n<100K
ky:
- 10K<n<100K
lg:
- 100K<n<1M
lt:
- 10K<n<100K
lv:
- 1K<n<10K
mdf:
- n<1K
mhr:
- 100K<n<1M
mk:
- n<1K
ml:
- 1K<n<10K
mn:
- 10K<n<100K
mr:
- 10K<n<100K
mrj:
- 10K<n<100K
mt:
- 10K<n<100K
myv:
- 1K<n<10K
nan-tw:
- 10K<n<100K
ne-NP:
- n<1K
nl:
- 10K<n<100K
nn-NO:
- n<1K
or:
- 1K<n<10K
pa-IN:
- 1K<n<10K
pl:
- 100K<n<1M
pt:
- 100K<n<1M
rm-sursilv:
- 1K<n<10K
rm-vallader:
- 1K<n<10K
ro:
- 10K<n<100K
ru:
- 100K<n<1M
rw:
- 1M<n<10M
sah:
- 1K<n<10K
sat:
- n<1K
sc:
- 1K<n<10K
sk:
- 10K<n<100K
skr:
- 1K<n<10K
sl:
- 10K<n<100K
sr:
- 1K<n<10K
sv-SE:
- 10K<n<100K
sw:
- 100K<n<1M
ta:
- 100K<n<1M
th:
- 100K<n<1M
ti:
- n<1K
tig:
- n<1K
tok:
- 1K<n<10K
tr:
- 10K<n<100K
tt:
- 10K<n<100K
tw:
- n<1K
ug:
- 10K<n<100K
uk:
- 10K<n<100K
ur:
- 100K<n<1M
uz:
- 100K<n<1M
vi:
- 10K<n<100K
vot:
- n<1K
yue:
- 10K<n<100K
zh-CN:
- 100K<n<1M
zh-HK:
- 100K<n<1M
zh-TW:
- 100K<n<1M
source_datasets:
- extended|common_voice
task_categories:
- automatic-speech-recognition
task_ids: []
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 11.0
language_bcp47:
- ab
- ar
- as
- ast
- az
- ba
- bas
- be
- bg
- bn
- br
- ca
- ckb
- cnh
- cs
- cv
- cy
- da
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- gl
- gn
- ha
- hi
- hsb
- hu
- hy-AM
- ia
- id
- ig
- it
- ja
- ka
- kab
- kk
- kmr
- ky
- lg
- lt
- lv
- mdf
- mhr
- mk
- ml
- mn
- mr
- mrj
- mt
- myv
- nan-tw
- ne-NP
- nl
- nn-NO
- or
- pa-IN
- pl
- pt
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sat
- sc
- sk
- skr
- sl
- sr
- sv-SE
- sw
- ta
- th
- ti
- tig
- tok
- tr
- tt
- tw
- ug
- uk
- ur
- uz
- vi
- vot
- yue
- zh-CN
- zh-HK
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
---
# Dataset Card for Common Voice Corpus 11.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of unique MP3 files, each paired with a corresponding text file.
Many of the 24210 recorded hours in the dataset also include demographic metadata such as age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 16413 validated hours across 100 languages, and more voices and languages are always being added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated by reviewers and received upvotes indicating that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.”_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer from audio data alone whether a sentence is a quotation. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_11_0", "en", use_auth_token=True)
def prepare_dataset(batch):
    """Function to preprocess the dataset with the .map method"""
    transcription = batch["sentence"]

    if transcription.startswith('"') and transcription.endswith('"'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]

    if transcription and transcription[-1] not in [".", "?", "!"]:
        # append a full-stop to sentences that do not end in punctuation
        # (the emptiness check guards against a transcription that was only quotes)
        transcription = transcription + "."

    batch["sentence"] = transcription
    return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
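To make the effect of this normalization concrete, here is a standalone re-implementation of the same string logic applied to a few example transcriptions (pure Python, no `datasets` dependency; the function name `normalize_transcription` is ours, not part of the library):

```python
def normalize_transcription(transcription: str) -> str:
    """Re-implementation of the normalization applied in prepare_dataset above."""
    # strip wrapping quotation marks, as they do not affect the spoken audio
    if transcription.startswith('"') and transcription.endswith('"'):
        transcription = transcription[1:-1]
    # append a full-stop to sentences that do not end in punctuation
    if transcription and transcription[-1] not in [".", "?", "!"]:
        transcription = transcription + "."
    return transcription

print(normalize_transcription('"the cat sat on the mat."'))  # the cat sat on the mat.
print(normalize_transcription("is it raining"))              # is it raining.
print(normalize_transcription("it works!"))                  # it works!
```

Quotation stripping runs before the punctuation check, so a sentence that is quoted but unpunctuated gets both fixes applied in one pass.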
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
king007 | null | null | null | false | null | false | king007/testing | 2022-10-12T09:53:24.000Z | null | false | f2e4119ca296310f84dca6da0ab33f82d479c517 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/king007/testing/resolve/main/README.md | ---
license: afl-3.0
---
|
Xeustudio | null | null | null | false | null | false | Xeustudio/ibaillanos | 2022-10-12T10:14:04.000Z | null | false | 236b40dc05304a131dce9142c6a014fbf910b6ef | [] | [] | https://huggingface.co/datasets/Xeustudio/ibaillanos/resolve/main/README.md | |
arincon | null | null | null | false | 22 | false | arincon/paws-es-paraphrase | 2022-10-12T12:11:45.000Z | null | false | 5916cbf0414556e5562dd64dd5ebca3d856b2f77 | [] | [] | https://huggingface.co/datasets/arincon/paws-es-paraphrase/resolve/main/README.md | paws-x filtered to finetune transformer model to generate paraphrase spanish sentences
Specifically, this is the Spanish (`es`) portion of PAWS-X filtered to examples with `label == 1` and `sentence1 != sentence2`.
Mirrar | null | null | null | false | null | false | Mirrar/Longcu | 2022-10-12T15:31:45.000Z | null | false | fdcf096a04897234e15862188b93fa6f5675e208 | [] | [
"license:mpl-2.0"
] | https://huggingface.co/datasets/Mirrar/Longcu/resolve/main/README.md | ---
license: mpl-2.0
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-e08cac-1731660420 | 2022-10-12T12:16:04.000Z | null | false | 6c5331e565ec477e22a2d83126ddb331c90f759d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-e08cac-1731660420/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test
eval_info:
task: text_zero_shot_classification
model: gpt2
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test
dataset_config: mathemakitten--winobias_antistereotype_test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: gpt2
* Dataset: mathemakitten/winobias_antistereotype_test
* Config: mathemakitten--winobias_antistereotype_test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@tomekkorbak](https://huggingface.co/tomekkorbak) for evaluating this model. |
allenai | null | null | null | false | 1 | false | allenai/multixscience_dense_max | 2022-11-05T23:07:10.000Z | multi-xscience | false | ff5e208491c156d8126b21648e82d1c1bc9527b2 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:summarization",
"task_ids:summarization-other-paper-abstract-generation"
] | https://huggingface.co/datasets/allenai/multixscience_dense_max/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- summarization-other-paper-abstract-generation
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==20`
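The top-k strategies that appear across these dataset variants (`max`, `mean`, `oracle`) can be sketched in a few lines of Python. This is a toy illustration, assuming only that each example records how many input documents it originally had; whether a `mean` strategy rounds or truncates is an assumption here:

```python
# Hypothetical illustration of the top-k strategies used across these
# dataset variants. doc_counts[i] is the number of input source
# documents the i-th example originally had.

def top_k(doc_counts, strategy):
    """Return the number of documents to retrieve for each example."""
    if strategy == "max":
        # one global k: the largest document count seen in the dataset
        k = max(doc_counts)
        return [k] * len(doc_counts)
    if strategy == "mean":
        # one global k: the (rounded) mean document count -- rounding
        # is an assumption, not confirmed by the dataset cards
        k = round(sum(doc_counts) / len(doc_counts))
        return [k] * len(doc_counts)
    if strategy == "oracle":
        # per-example k: exactly as many documents as the example had
        return list(doc_counts)
    raise ValueError(f"unknown strategy: {strategy}")

counts = [2, 3, 4, 20]
print(top_k(counts, "max"))     # [20, 20, 20, 20]
print(top_k(counts, "mean"))    # [7, 7, 7, 7]
print(top_k(counts, "oracle"))  # [2, 3, 4, 20]
```

Note how a single outlier example inflates `k` under `"max"` for every example, while `"oracle"` matches each example exactly.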
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5270 | 0.2005 | 0.0573 | 0.3785 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5310 | 0.2026 | 0.059 | 0.3831 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5229 | 0.2081 | 0.058 | 0.3794 | |
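The metrics in these tables can be reproduced per query from a ranked list and the set of relevant documents (here, an example's original input documents); a minimal sketch with binary relevance:

```python
# Minimal per-query versions of the reported retrieval metrics.
# `ranked` is the retriever's ranking (best first); `relevant` is the
# set of the example's original input documents (binary relevance).

def precision_at_k(ranked, relevant, k):
    return sum(1 for d in ranked[:k] if d in relevant) / k

def recall_at_k(ranked, relevant, k):
    return sum(1 for d in ranked[:k] if d in relevant) / len(relevant)

def r_precision(ranked, relevant):
    # Precision@R, where R is the number of relevant documents
    return precision_at_k(ranked, relevant, len(relevant))

ranked = ["d1", "d7", "d2", "d9", "d3"]
relevant = {"d1", "d2", "d3"}
print(precision_at_k(ranked, relevant, 5))  # 3 hits in top 5 -> 0.6
print(recall_at_k(ranked, relevant, 5))     # 3 of 3 found -> 1.0
print(r_precision(ranked, relevant))        # top 3 = d1,d7,d2 -> 2/3
```

Under an `"oracle"` strategy, `k` equals the number of relevant documents, so Precision@k, Recall@k, and Rprec coincide by construction.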
allenai | null | null | null | false | 1 | false | allenai/multixscience_dense_mean | 2022-11-05T23:10:06.000Z | multi-xscience | false | 6f0861061fe3d7bc75c5d78a0a3fed2267ef8037 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:summarization",
"task_ids:summarization-other-paper-abstract-generation"
] | https://huggingface.co/datasets/allenai/multixscience_dense_mean/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- summarization-other-paper-abstract-generation
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==4`

Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5270 | 0.2005 | 0.1551 | 0.2357 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5310 | 0.2026 | 0.1603 | 0.2432 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5229 | 0.2081 | 0.1612 | 0.2440 | |
allenai | null | null | null | false | 1 | false | allenai/multixscience_dense_oracle | 2022-11-06T21:50:48.000Z | multi-xscience | false | 36cbd6168216b0ae8df139aa0a1e463b0107dbc0 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:summarization",
"task_ids:summarization-other-paper-abstract-generation"
] | https://huggingface.co/datasets/allenai/multixscience_dense_oracle/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- summarization-other-paper-abstract-generation
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of the `train`, `validation`, and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5270 | 0.2005 | 0.2005 | 0.2005 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5310 | 0.2026 | 0.2026 | 0.2026 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5229 | 0.2081 | 0.2081 | 0.2081 | |
allenai | null | null | null | false | 1 | false | allenai/cochrane_dense_mean | 2022-11-06T00:13:10.000Z | multi-document-summarization | false | f25a31127151e2519f75695ad175b0b76a8f5f5f | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"task_categories:summarization",
"task_... | https://huggingface.co/datasets/allenai/cochrane_dense_mean/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
task_ids:
- summarization-other-query-based-summarization
- summarization-other-query-based-multi-document-summarization
- summarization-other-scientific-documents-summarization
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `target` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==9`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7790 | 0.4487 | 0.3438 | 0.4800 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7856 | 0.4424 | 0.3534 | 0.4913 |
Retrieval results on the `test` set:
N/A. The test set is blind, so we do not have any queries.
allenai | null | null | null | false | 3 | false | allenai/cochrane_dense_max | 2022-11-06T00:11:08.000Z | multi-document-summarization | false | 6d79c725843ab7de1d0863a379d86edcbaf7f264 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"task_categories:summarization",
"task_... | https://huggingface.co/datasets/allenai/cochrane_dense_max/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
task_ids:
- summarization-other-query-based-summarization
- summarization-other-query-based-multi-document-summarization
- summarization-other-scientific-documents-summarization
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `target` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==25`
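Under the hood, dense retrieval reduces to nearest-neighbour search over embeddings: each document is ranked by the inner product of its embedding with the query embedding. A toy NumPy sketch, using random vectors as stand-ins for the contriever embeddings the real pipeline computes:

```python
import numpy as np

# Toy stand-in for dense retrieval: rank corpus documents by inner
# product with the query embedding. In the real pipeline the
# embeddings come from facebook/contriever-msmarco; here we plant a
# query near document 42 so the expected result is known.
rng = np.random.default_rng(0)
corpus_emb = rng.normal(size=(1000, 768))                  # 1000 "documents"
query_emb = corpus_emb[42] + 0.01 * rng.normal(size=768)   # near doc 42

scores = corpus_emb @ query_emb        # inner-product score per document
k = 25                                 # e.g. the "max" k for this dataset
top_k_ids = np.argsort(-scores)[:k]    # best-first ranking
print(top_k_ids[0])                    # expected: 42 (the planted document)
```

At this corpus scale a brute-force matrix product suffices; larger corpora typically use an approximate nearest-neighbour index instead.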
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7790 | 0.4487 | 0.1959 | 0.6268 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7856 | 0.4424 | 0.1995 | 0.6433 |
Retrieval results on the `test` set:
N/A. The test set is blind, so we do not have any queries.
allenai | null | null | null | false | 1 | false | allenai/cochrane_dense_oracle | 2022-11-06T21:53:50.000Z | multi-document-summarization | false | d42bda16a71e1d35ed9b895d35d8e11a9bd624e4 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"task_categories:summarization",
"task_... | https://huggingface.co/datasets/allenai/cochrane_dense_oracle/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
task_ids:
- summarization-other-query-based-summarization
- summarization-other-query-based-multi-document-summarization
- summarization-other-scientific-documents-summarization
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of the `train`, `validation`, and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `target` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7790 | 0.4487 | 0.4487 | 0.4487 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7856 | 0.4424 | 0.4424 | 0.4424 |
Retrieval results on the `test` set:
N/A. The test set is blind, so we do not have any queries.
shtnai | null | null | null | false | null | false | shtnai/victor | 2022-10-12T13:48:30.000Z | null | false | 5134d903d6e1b9e1e120e9dcfd5a193c738f67fc | [] | [
"license:other"
] | https://huggingface.co/datasets/shtnai/victor/resolve/main/README.md | ---
license: other
---
|
allenai | null | null | null | false | 1 | false | allenai/ms2_dense_max | 2022-11-05T23:31:59.000Z | multi-document-summarization | false | 2ab4c73b5ff576a79f78576d30ff210edd702029 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"task_categories:summarization",
"task_... | https://huggingface.co/datasets/allenai/ms2_dense_max/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
task_ids:
- summarization-other-query-based-summarization
- summarization-other-query-based-multi-document-summarization
- summarization-other-scientific-documents-summarization
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `background` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==25`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4764 | 0.2395 | 0.1932 | 0.2895 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4364 | 0.2125 | 0.1823 | 0.2524 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4481 | 0.2224 | 0.1943 | 0.2567 | |
allenai | null | null | null | false | 1 | false | allenai/ms2_dense_mean | 2022-11-05T23:34:01.000Z | multi-document-summarization | false | aeac14e4f697d9b36ffb2c358d5c0235589335b6 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"task_categories:summarization",
"task_... | https://huggingface.co/datasets/allenai/ms2_dense_mean/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
task_ids:
- summarization-other-query-based-summarization
- summarization-other-query-based-multi-document-summarization
- summarization-other-scientific-documents-summarization
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `background` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==17`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4764 | 0.2395 | 0.2271 | 0.2418 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4364 | 0.2125 | 0.2131 | 0.2074 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4481 | 0.2224 | 0.2254 | 0.2100 | |
allenai | null | null | null | false | 1 | false | allenai/ms2_dense_oracle | 2022-11-06T21:53:00.000Z | multi-document-summarization | false | b6a4a67125f534559302845c02f99d7599a5ae1a | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"task_categories:summarization",
"task_... | https://huggingface.co/datasets/allenai/ms2_dense_oracle/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
task_ids:
- summarization-other-query-based-summarization
- summarization-other-query-based-multi-document-summarization
- summarization-other-scientific-documents-summarization
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of the `train`, `validation`, and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `background` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4764 | 0.2395 | 0.2395 | 0.2395 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4364 | 0.2125 | 0.2125 | 0.2125 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4481 | 0.2224 | 0.2224 | 0.2224 | |
allenai | null | null | null | false | 1 | false | allenai/wcep_dense_max | 2022-11-05T22:57:21.000Z | wcep | false | 8c0fb55773fefcdd78156d70fba8067c1e27a65b | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:summarization",
"task_ids:news-articles-summarization"
] | https://huggingface.co/datasets/allenai/wcep_dense_max/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: WCEP-10
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: wcep
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except the input source documents of its `test` split have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==10`
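Contriever-style retrievers embed each text by mean-pooling its token embeddings under the attention mask, so padded positions do not contribute. A small NumPy sketch of just the pooling step, with toy arrays standing in for the model's hidden states (the actual model is `facebook/contriever-msmarco`):

```python
import numpy as np

# Mean pooling over token embeddings, ignoring padded positions.
# hidden: (batch, seq_len, dim) token embeddings; mask: (batch, seq_len)
# with 1 for real tokens and 0 for padding.
def mean_pool(hidden, mask):
    mask = mask[..., None].astype(hidden.dtype)      # (batch, seq, 1)
    summed = (hidden * mask).sum(axis=1)             # zero out padding
    counts = np.clip(mask.sum(axis=1), 1e-9, None)   # avoid divide-by-zero
    return summed / counts                           # (batch, dim)

hidden = np.arange(12, dtype=float).reshape(1, 3, 4)  # one doc, 3 tokens
mask = np.array([[1, 1, 0]])                          # last token is padding
pooled = mean_pool(hidden, mask)
print(pooled)  # mean of the first two token vectors: [[2. 3. 4. 5.]]
```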
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8590 | 0.6490 | 0.5967 | 0.6631 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8578 | 0.6326 | 0.6040 | 0.6401 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8678 | 0.6631 | 0.6301 | 0.6740 | |
allenai | null | null | null | false | 2 | false | allenai/wcep_dense_oracle | 2022-11-06T21:49:24.000Z | wcep | false | e603c454733839306a7610a72bba28a992ba778a | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:summarization",
"task_ids:news-articles-summarization"
] | https://huggingface.co/datasets/allenai/wcep_dense_oracle/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: WCEP-10
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: wcep
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except the input source documents of the `train`, `validation`, and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8590 | 0.6490 | 0.6490 | 0.6490 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8578 | 0.6326 | 0.6326 | 0.6326 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8678 | 0.6631 | 0.6631 | 0.6631 | |
allenai | null | null | null | false | 1 | false | allenai/wcep_dense_mean | 2022-11-05T22:59:38.000Z | wcep | false | fef2989ea08b07094c279b421fb02889b6c37762 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:summarization",
"task_ids:news-articles-summarization"
] | https://huggingface.co/datasets/allenai/wcep_dense_mean/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: WCEP-10
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: wcep
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except the input source documents of its `test` split have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==9`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8590 | 0.6490 | 0.6239 | 0.6271 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8578 | 0.6326 | 0.6301 | 0.6031 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8678 | 0.6631 | 0.6564 | 0.6338 | |
maderix | null | null | null | false | 78 | false | maderix/flickr_bw_rgb | 2022-10-12T15:34:25.000Z | null | false | 478bb955bc1365a8a14fd20a98c3505d75f2ba4c | [] | [
"license:cc-by-nc-sa-4.0",
"annotations_creators:machine-generated",
"language:en",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:N/A",
"task_categories:text-to-image"
] | https://huggingface.co/datasets/maderix/flickr_bw_rgb/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: 'flickr_bw_rgb'
size_categories:
- n<1K
source_datasets:
- N/A
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for Flickr_bw_rgb
An image-caption dataset containing groups of black-and-white and color images with corresponding captions that describe the content of each image and include the phrase 'colorized photograph of' or 'black and white photograph of'. The dataset can be used for fine-tuning text-to-image models. Only a `train` split is provided.
## Dataset Structure
- `"train/<filename>.jpg"`: the images, in JPEG format
- `"train/metadata.jsonl"`: the metadata, with the following columns:
  - `file_name`
  - `caption`
## Citation
If you use this dataset, please cite it as:
```
@misc{maderix2022flickrbwrgb,
author = {maderix: maderix@gmail.com},
title = {flickr_bw_rgb},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/maderix/flickr_bw_rgb/}}
}
``` |
Roderich | null | null | null | false | null | false | Roderich/2nd_testing | 2022-10-24T23:36:52.000Z | null | false | c355349313daf417a5db975c831f10815b9bdef0 | [] | [
"license:other"
] | https://huggingface.co/datasets/Roderich/2nd_testing/resolve/main/README.md | ---
license: other
---
|
debosneed | null | null | null | false | null | false | debosneed/Byzantine_Manuscript | 2022-10-12T16:07:34.000Z | null | false | 624cbce49b88ee29a4cf577f25f68c484b7dcab2 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/debosneed/Byzantine_Manuscript/resolve/main/README.md | ---
license: afl-3.0
---
|
arincon | null | null | null | false | 2 | false | arincon/tapaco-es-paraphrase | 2022-10-12T22:05:32.000Z | null | false | d12015253ccde9b9840a8bc8bb1070c965b449e6 | [] | [] | https://huggingface.co/datasets/arincon/tapaco-es-paraphrase/resolve/main/README.md | tapaco es to paraphrase |