author | cardData | citation | description | disabled | downloads | gated | id | lastModified | paperswithcode_id | private | sha | siblings | tags | readme_url | readme |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
feradauto | null | @misc{https://doi.org/10.48550/arxiv.2210.01478,
doi = {10.48550/ARXIV.2210.01478},
url = {https://arxiv.org/abs/2210.01478},
author = {Jin, Zhijing and Levine, Sydney and Gonzalez, Fernando and Kamal, Ojasv and Sap, Maarten and Sachan, Mrinmaya and Mihalcea, Rada and Tenenbaum, Josh and Schölkopf, Bernhard},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Computers and Society (cs.CY), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Share Alike 4.0 International}
} | We present a novel challenge set consisting of moral exception question answering (MoralExceptQA) of cases that involve potentially permissible moral exceptions. | false | 16 | false | feradauto/MoralExceptQA | 2022-10-27T15:42:04.000Z | null | false | def71b74159a8460ce977fc2ace42e32947fb3fa | [] | [
"arxiv:2210.01478",
"task_categories:text-classification"
] | https://huggingface.co/datasets/feradauto/MoralExceptQA/resolve/main/README.md | ---
pretty_name: MoralExceptQA
task_categories:
- text-classification
---
# Dataset Card for MoralExceptQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [MoralCoT](https://github.com/feradauto/MoralCoT)
- **Paper:** [When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment](https://arxiv.org/abs/2210.01478)
- **Point of Contact:** [Fernando Gonzalez](mailto:fgonzalez@ethz.ch) , [Zhijing Jin](mailto:zjin@tue.mpg.de)
### Dataset Summary
Challenge set consisting of moral exception question answering of cases that involve potentially permissible moral exceptions. Our challenge set, MoralExceptQA, is drawn from a series of recent moral psychology studies designed to investigate the flexibility of human moral cognition – specifically, the ability of humans to figure out when it is permissible to break a previously established or well-known rule.
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
Each instance is a rule-breaking scenario accompanied by an average human response.
### Data Fields
- `study`: The moral psychology study. Studies were designed to investigate the ability of humans
to figure out when it is permissible to break a previously established or well-known rule.
- `context`: The context of the scenario. Different contexts within the same study are potentially governed by the same rule.
- `condition`: Condition in the scenario.
- `scenario`: Text description of the scenario.
- `human.response`: Average human response (on a scale from 0 to 1), equivalent to the percentage of respondents who considered breaking the rule permissible.
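A minimal loading sketch using the 🤗 `datasets` library (assuming the default configuration; the 0.5 threshold below is an illustrative assumption, not part of the dataset) that binarizes `human.response` into a permissible/impermissible label:
```
from datasets import load_dataset

# MoralExceptQA ships as a single split; take whichever split the loader exposes.
dataset_dict = load_dataset("feradauto/MoralExceptQA")
ds = dataset_dict[list(dataset_dict.keys())[0]]

# Binarize the average human response at 0.5 (illustrative threshold only).
ds = ds.map(lambda ex: {"permissible": int(ex["human.response"] >= 0.5)})
print(ds[0]["scenario"], ds[0]["permissible"])
```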
### Data Splits
MoralExceptQA contains one split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
Information about the data collection and annotators can be found in the appendix of [our paper](https://arxiv.org/abs/2210.01478).
### Personal and Sensitive Information
The MoralExceptQA dataset does not have privacy concerns.
## Considerations for Using the Data
### Social Impact of Dataset
The intended use of this work is to contribute to AI safety research. We do not intend this work to be developed as a tool to automate moral decision-making on behalf of humans, but instead as a way of mitigating risks caused by LLMs’ misunderstanding of human values. The MoralExceptQA dataset does not have privacy concerns or offensive content.
### Discussion of Biases
Our subjects are U.S. residents, and therefore our conclusions are limited to this population.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The MoralExceptQA dataset is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2210.01478,
doi = {10.48550/ARXIV.2210.01478},
url = {https://arxiv.org/abs/2210.01478},
author = {Jin, Zhijing and Levine, Sydney and Gonzalez, Fernando and Kamal, Ojasv and Sap, Maarten and Sachan, Mrinmaya and Mihalcea, Rada and Tenenbaum, Josh and Schölkopf, Bernhard},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Computers and Society (cs.CY), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Share Alike 4.0 International}
}
``` |
CrisPO | null | null | null | false | null | false | CrisPO/Demo_clase_platzi | 2022-10-26T00:57:52.000Z | null | false | 7cb8a114a549fed00b53fa81defc8d5c4203b403 | [] | [
"license:mit"
] | https://huggingface.co/datasets/CrisPO/Demo_clase_platzi/resolve/main/README.md | ---
license: mit
---
|
nishimaki | null | null | null | false | null | false | nishimaki/taiyo | 2022-10-26T02:37:00.000Z | null | false | 441a3cdb297dac28361b08fe2446704dfc528b0a | [] | [
"license:openrail"
] | https://huggingface.co/datasets/nishimaki/taiyo/resolve/main/README.md | ---
license: openrail
---
|
uripper | null | null | null | false | 38 | false | uripper/LichessGames | 2022-10-26T22:11:03.000Z | null | false | 417d3b60cce220f759c1fe59502bba60d71aef56 | [] | [
"license:cc"
] | https://huggingface.co/datasets/uripper/LichessGames/resolve/main/README.md | ---
license: cc
---
|
MarkGG | null | null | null | false | 32 | false | MarkGG/Romance-cleaned-1 | 2022-10-26T03:33:28.000Z | null | false | d017d05d7a9a805bb6cdb2a58abcf1561437011c | [] | [] | https://huggingface.co/datasets/MarkGG/Romance-cleaned-1/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5388007.848468044
num_examples: 6491
- name: validation
num_bytes: 599313.1515319562
num_examples: 722
download_size: 3844960
dataset_size: 5987321.0
---
# Dataset Card for "Romance-cleaned-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-acb860-1886064279 | 2022-10-26T04:15:41.000Z | null | false | 2f6f064d3cb82533354f710c230caf18bb7c521c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-acb860-1886064279/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-1.3b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot
dataset_config: mathemakitten--winobias_antistereotype_test_cot
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-acb860-1886064280 | 2022-10-26T04:17:02.000Z | null | false | 465bad23e3af0249144d4497248a2812d90ccc7d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-acb860-1886064280/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-2.7b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot
dataset_config: mathemakitten--winobias_antistereotype_test_cot
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-acb860-1886064281 | 2022-10-26T04:38:02.000Z | null | false | 692c8e1dcabbe24e337357e5624f1ccb2bae92cc | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-acb860-1886064281/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-13b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot
dataset_config: mathemakitten--winobias_antistereotype_test_cot
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-14b0f6-1886164287 | 2022-10-26T04:42:02.000Z | null | false | 25c4f65bb2c90a1c5ea0f5990287fce9529f3ae2 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-14b0f6-1886164287/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: train
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-14b0f6-1886164288 | 2022-10-26T04:43:01.000Z | null | false | 0eaa9942f56bc4171844477deb35cb3fa3f7585d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-14b0f6-1886164288/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: train
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-d57983-1886264289 | 2022-10-26T04:40:07.000Z | null | false | a582213b5f1d8c2c0a507ed7fea78a7863351bdc | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-d57983-1886264289/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-d57983-1886264290 | 2022-10-26T04:40:35.000Z | null | false | 5c5bc05f38b66ceb8f0ef48249ea8f70eeaf6489 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-d57983-1886264290/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-bd0c63-1886364291 | 2022-10-26T04:40:21.000Z | null | false | a3b7a1c5b7d2ee5dea4f1016816d4b0a21608ab2 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-bd0c63-1886364291/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-bd0c63-1886364292 | 2022-10-26T04:40:50.000Z | null | false | 40cc1ba923431846d9c2a83a5b70843f3fcfaf7a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-bd0c63-1886364292/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
bond005 | null | null | null | false | 46 | false | bond005/sberdevices_golos_100h_farfield | 2022-10-27T04:23:04.000Z | golos | false | c93949f7140beef4adc404e7b54841e957f81c54 | [] | [
"arxiv:2106.10161",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language:ru",
"license:other",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"source_datasets:extended",
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification"
] | https://huggingface.co/datasets/bond005/sberdevices_golos_100h_farfield/resolve/main/README.md | ---
pretty_name: Golos
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- ru
license:
- other
multilinguality:
- monolingual
paperswithcode_id: golos
size_categories:
- 10K<n<100k
source_datasets:
- extended
task_categories:
- automatic-speech-recognition
- audio-classification
---
# Dataset Card for sberdevices_golos_100h_farfield
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Golos ASR corpus](https://www.openslr.org/114)
- **Repository:** [Golos dataset](https://github.com/sberdevices/golos)
- **Paper:** [Golos: Russian Dataset for Speech Research](https://arxiv.org/pdf/2106.10161.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Nikolay Karpov](mailto:karpnv@gmail.com)
### Dataset Summary
Sberdevices Golos is a corpus of approximately 1200 hours of 16kHz Russian speech from crowd (reading speech) and farfield (communication with smart devices) domains, prepared by SberDevices Team (Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov). The data is derived from the crowd-sourcing platform, and has been manually annotated.
The authors divide the whole dataset into train and test subsets. The training subset includes approximately 1000 hours. For experiments with a limited number of records, the authors identified training subsets of shorter length: 100 hours, 10 hours, 1 hour, and 10 minutes.
This dataset is a simpler version of the above-mentioned Golos:
- it includes the farfield domain only (without any sound from the crowd domain);
- validation split is built on the 10-hour training subset;
- training split corresponds to the 100-hour training subset without sounds from the 10-hour training subset;
- test split is a full original test split.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER.
### Languages
The audio is in Russian.
## Dataset Structure
### Data Instances
A typical data point comprises the audio data, usually called `audio` and its transcription, called `transcription`. Any additional information about the speaker and the passage which contains the transcription is not provided.
```
{'audio': {'path': None,
'array': array([ 1.22070312e-04, 1.22070312e-04, 9.15527344e-05, ...,
6.10351562e-05, 6.10351562e-05, 3.05175781e-05]), dtype=float64),
'sampling_rate': 16000},
'transcription': 'джой источники истории турции'}
```
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- transcription: the transcription of the audio file.
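A minimal sketch of the access pattern recommended above (index the row first, then read the `audio` column), assuming the dataset loads directly with the 🤗 `datasets` library:
```
from datasets import load_dataset

# Load the farfield training split; each row holds an audio dict and a transcription.
golos = load_dataset("bond005/sberdevices_golos_100h_farfield", split="train")

sample = golos[0]            # query the row index first ...
audio = sample["audio"]      # ... then access the audio column (decoded on the fly)
print(audio["sampling_rate"], audio["array"].shape)
print(sample["transcription"])
```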
### Data Splits
This dataset is a simpler version of the original Golos:
- it includes the farfield domain only (without any sound from the crowd domain);
- validation split is built on the 10-hour training subset;
- training split corresponds to the 100-hour training subset without sounds from the 10-hour training subset;
- test split is a full original test split.
| | Train | Validation | Test |
| ----- | ------ | ---------- | ----- |
| examples | 9570 | 933 | 1916 |
| hours | 10.3h | 1.0h | 1.4h |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
All recorded audio files were manually annotated on the crowd-sourcing platform.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voices. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov.
### Licensing Information
[Public license with attribution and conditions reserved](https://github.com/sberdevices/golos/blob/master/license/en_us.pdf)
### Citation Information
```
@misc{karpov2021golos,
author = {Karpov, Nikolay and Denisenko, Alexander and Minkin, Fedor},
title = {Golos: Russian Dataset for Speech Research},
publisher = {arXiv},
year = {2021},
url = {https://arxiv.org/abs/2106.10161}
}
```
### Contributions
Thanks to [@bond005](https://github.com/bond005) for adding this dataset.
|
yeeb | null | null | null | false | null | false | yeeb/C50 | 2022-10-26T05:55:06.000Z | null | false | a8395938b476a1cf89b6db79853110ee22616fcc | [] | [
"license:openrail"
] | https://huggingface.co/datasets/yeeb/C50/resolve/main/README.md | ---
license: openrail
---
## Dataset Description
The dataset is a subset of RCV1. This corpus has already been used in author identification experiments. The top 50 authors (with respect to total size of articles) were selected; specifically, 50 authors of texts labeled with at least one subtopic of the class CCAT (corporate/industrial), so as to minimize the topic factor in distinguishing among the texts. The training corpus consists of 2,500 texts (50 per author) and the test corpus includes another 2,500 texts (50 per author), non-overlapping with the training texts.
- **Homepage:** https://archive.ics.uci.edu/ml/datasets/Reuter_50_50
- **Repository:** https://archive.ics.uci.edu/ml/datasets/Reuter_50_50
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** |
hbXNov | null | null | null | false | 1 | false | hbXNov/entigen | 2022-10-26T07:20:22.000Z | null | false | 9f2b30fed6f314b8774d02e290843ecf086b0031 | [] | [] | https://huggingface.co/datasets/hbXNov/entigen/resolve/main/README.md | Relevant Paper - `https://github.com/Hritikbansal/entigen_emnlp`
language of prompts - English |
Dialogue-Model-Research-Group | null | null | V2EX is a dataset curated by https://www.v2ex.com/ open data. | false | null | false | Dialogue-Model-Research-Group/v2ex | 2022-11-15T14:52:02.000Z | null | false | c5e68a003bc53738988b3b44a2134da6e35ce271 | [] | [
"license:cc"
] | https://huggingface.co/datasets/Dialogue-Model-Research-Group/v2ex/resolve/main/README.md | ---
license: cc
dataset_info:
- config_name: topic
features:
- name: id
dtype: int64
- name: title
dtype: string
- name: content
dtype: string
- name: content_rendered
dtype: string
- name: syntax
dtype: int64
- name: url
dtype: string
- name: replies
dtype: int64
- name: last_reply_by
dtype: string
- name: created
dtype: int64
- name: last_modified
dtype: int64
- name: last_touched
dtype: int64
- name: member
struct:
- name: id
dtype: int64
- name: username
dtype: string
- name: bio
dtype: string
- name: website
dtype: string
- name: github
dtype: string
- name: url
dtype: string
- name: avatar
dtype: string
- name: created
dtype: int64
- name: node
struct:
- name: id
dtype: int64
- name: url
dtype: string
- name: name
dtype: string
- name: title
dtype: string
- name: header
dtype: string
- name: footer
dtype: string
- name: avatar
dtype: string
- name: topics
dtype: int64
- name: created
dtype: int64
- name: last_modified
dtype: int64
- name: supplements
sequence:
- name: id
dtype: int64
- name: content
dtype: string
- name: content_rendered
dtype: string
- name: syntax
dtype: int64
- name: created
dtype: int64
splits:
- name: train
num_bytes: 522790208
num_examples: 262120
download_size: 153558181
dataset_size: 522790208
- config_name: replies
features:
- name: id
dtype: int64
- name: content
dtype: string
- name: content_rendered
dtype: string
- name: created
dtype: int64
- name: member
struct:
- name: id
dtype: int64
- name: username
dtype: string
- name: bio
dtype: string
- name: website
dtype: string
- name: github
dtype: string
- name: url
dtype: string
- name: avatar
dtype: string
- name: created
dtype: int64
- name: topic_id
dtype: int64
splits:
- name: train
num_bytes: 1554954801
num_examples: 3553953
download_size: 462827899
dataset_size: 1554954801
---
|
leslyarun | null | null | null | false | 16 | false | leslyarun/c4_200m_gec_train100k_test25k | 2022-10-26T07:59:31.000Z | null | false | f25e9b73b1ff9fa992e8b07dc68a6e5d09fa70fe | [] | [
"language:en",
"source_datasets:allenai/c4",
"task_categories:text-generation",
"tags:grammatical-error-correction"
] | https://huggingface.co/datasets/leslyarun/c4_200m_gec_train100k_test25k/resolve/main/README.md | ---
language:
- en
source_datasets:
- allenai/c4
task_categories:
- text-generation
pretty_name: C4 200M Grammatical Error Correction Dataset
tags:
- grammatical-error-correction
---
# C4 200M
# Dataset Summary
C4 200M Sample Dataset, adapted from https://huggingface.co/datasets/liweili/c4_200m
C4_200m is a collection of 185 million sentence pairs generated from the cleaned English dataset from C4. This dataset can be used in grammatical error correction (GEC) tasks.
The corruption edits and scripts used to synthesize this dataset are referenced from: [C4_200M Synthetic Dataset](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction)
# Description
As discussed before, the full C4 200M collection contains 185 million sentence pairs. Each example has these two attributes: `input` and `output`. Here is a sample from the dataset:
```
{
"input": "Bitcoin is for $7,094 this morning, which CoinDesk says."
"output": "Bitcoin goes for $7,094 this morning, according to CoinDesk."
}
``` |
robbye123 | null | null | null | false | null | false | robbye123/images | 2022-10-26T07:55:38.000Z | null | false | e54d38bb908f734558f6e749862d29ccf06d2ce3 | [] | [] | https://huggingface.co/datasets/robbye123/images/resolve/main/README.md | |
juliensimon | null | null | null | false | 27 | false | juliensimon/food102 | 2022-10-26T19:43:21.000Z | null | false | 41c51d1746fa0bd24992037a8a00d68abd21aa76 | [] | [] | https://huggingface.co/datasets/juliensimon/food102/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: apple_pie
1: baby_back_ribs
2: baklava
3: beef_carpaccio
4: beef_tartare
5: beet_salad
6: beignets
7: bibimbap
8: boeuf_bourguignon
9: bread_pudding
10: breakfast_burrito
11: bruschetta
12: caesar_salad
13: cannoli
14: caprese_salad
15: carrot_cake
16: ceviche
17: cheese_plate
18: cheesecake
19: chicken_curry
20: chicken_quesadilla
21: chicken_wings
22: chocolate_cake
23: chocolate_mousse
24: churros
25: clam_chowder
26: club_sandwich
27: crab_cakes
28: creme_brulee
29: croque_madame
30: cup_cakes
31: deviled_eggs
32: donuts
33: dumplings
34: edamame
35: eggs_benedict
36: escargots
37: falafel
38: filet_mignon
39: fish_and_chips
40: foie_gras
41: french_fries
42: french_onion_soup
43: french_toast
44: fried_calamari
45: fried_rice
46: frozen_yogurt
47: garlic_bread
48: gnocchi
49: greek_salad
50: grilled_cheese_sandwich
51: grilled_salmon
52: guacamole
53: gyoza
54: hamburger
55: hot_and_sour_soup
56: hot_dog
57: huevos_rancheros
58: hummus
59: ice_cream
60: lasagna
61: lobster_bisque
62: lobster_roll_sandwich
63: macaroni_and_cheese
64: macarons
65: miso_soup
66: mussels
67: nachos
68: omelette
69: onion_rings
70: oysters
71: pad_thai
72: paella
73: pancakes
74: panna_cotta
75: peking_duck
76: pho
77: pizza
78: pork_chop
79: poutine
80: prime_rib
81: pulled_pork_sandwich
82: ramen
83: ravioli
84: red_velvet_cake
85: risotto
86: samosa
87: sashimi
88: scallops
89: seaweed_salad
90: shrimp_and_grits
91: spaghetti_bolognese
92: spaghetti_carbonara
93: spring_rolls
94: steak
95: strawberry_shortcake
96: sushi
97: tacos
98: takoyaki
99: tiramisu
100: tuna_tartare
101: waffles
splits:
- name: test
num_bytes: 1461368965.25
num_examples: 25500
- name: train
num_bytes: 4285789478.25
num_examples: 76500
download_size: 5534173074
dataset_size: 5747158443.5
---
# Dataset Card for "food102"
This is based on the [food101](https://huggingface.co/datasets/food101) dataset with an extra class generated with a Stable Diffusion model.
A detailed walk-through is available on [YouTube](https://youtu.be/sIe0eo3fYQ4).
|
siberspace | null | null | null | false | null | false | siberspace/julie | 2022-10-26T10:22:17.000Z | null | false | 4299936316ce2813f37498d647c3556ed42be2d3 | [] | [] | https://huggingface.co/datasets/siberspace/julie/resolve/main/README.md | |
bond005 | null | null | null | false | 37 | false | bond005/sberdevices_golos_10h_crowd | 2022-10-27T04:42:07.000Z | golos | false | e634b6b810e4d30c81b4c6d8262379fe8b9f708c | [] | [
"arxiv:2106.10161",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language:ru",
"license:other",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"source_datasets:extended",
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification"
] | https://huggingface.co/datasets/bond005/sberdevices_golos_10h_crowd/resolve/main/README.md | ---
pretty_name: Golos
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- ru
license:
- other
multilinguality:
- monolingual
paperswithcode_id: golos
size_categories:
- 10K<n<100k
source_datasets:
- extended
task_categories:
- automatic-speech-recognition
- audio-classification
---
# Dataset Card for sberdevices_golos_10h_crowd
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Golos ASR corpus](https://www.openslr.org/114)
- **Repository:** [Golos dataset](https://github.com/sberdevices/golos)
- **Paper:** [Golos: Russian Dataset for Speech Research](https://arxiv.org/pdf/2106.10161.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Nikolay Karpov](mailto:karpnv@gmail.com)
### Dataset Summary
Sberdevices Golos is a corpus of approximately 1200 hours of 16kHz Russian speech from crowd (reading speech) and farfield (communication with smart devices) domains, prepared by SberDevices Team (Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov). The data is derived from the crowd-sourcing platform, and has been manually annotated.
The authors divide the whole dataset into train and test subsets. The training subset includes approximately 1000 hours. For experiments with a limited number of records, the authors identified training subsets of shorter length: 100 hours, 10 hours, 1 hour, and 10 minutes.
This dataset is a simpler version of the above-mentioned Golos:
- it includes the crowd domain only (without any sound from the farfield domain);
- validation split is built on the 1-hour training subset;
- training split corresponds to the 10-hour training subset without sounds from the 1-hour training subset;
- test split is a full original test split.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER.
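Since WER is the headline metric for this task, here is a small sketch of computing it with the 🤗 `evaluate` library (the predictions below are made-up placeholders, not outputs of any particular model):
```
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical ASR outputs paired with their reference transcriptions.
references = ["включи свет на кухне", "какая завтра погода"]
predictions = ["включи свет на кухне", "какая сегодня погода"]

print(wer_metric.compute(predictions=predictions, references=references))
```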
### Languages
The audio is in Russian.
## Dataset Structure
### Data Instances
A typical data point comprises the audio data, usually called `audio` and its transcription, called `transcription`. Any additional information about the speaker and the passage which contains the transcription is not provided.
```
{'audio': {'path': None,
'array': array([ 3.05175781e-05, 3.05175781e-05, 0.00000000e+00, ...,
-1.09863281e-03, -7.93457031e-04, -1.52587891e-04]), dtype=float64),
'sampling_rate': 16000},
'transcription': 'шестнадцатая часть сезона пять сериала лемони сникет тридцать три несчастья'}
```
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- transcription: the transcription of the audio file.
### Data Splits
This dataset is a simpler version of the original Golos:
- it includes the crowd domain only (without any sound from the farfield domain);
- validation split is built on the 1-hour training subset;
- training split corresponds to the 10-hour training subset without sounds from the 1-hour training subset;
- test split is a full original test split.
| | Train | Validation | Test |
| ----- | ------ | ---------- | ----- |
| examples | 7993 | 793 | 9994 |
| hours | 8.9h | 0.9h | 11.2h |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
All recorded audio files were manually annotated on the crowd-sourcing platform.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voices. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov.
### Licensing Information
[Public license with attribution and conditions reserved](https://github.com/sberdevices/golos/blob/master/license/en_us.pdf)
### Citation Information
```
@misc{karpov2021golos,
author = {Karpov, Nikolay and Denisenko, Alexander and Minkin, Fedor},
title = {Golos: Russian Dataset for Speech Research},
publisher = {arXiv},
year = {2021},
url = {https://arxiv.org/abs/2106.10161}
}
```
### Contributions
Thanks to [@bond005](https://github.com/bond005) for adding this dataset.
|
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/winter_style | 2022-10-26T20:45:11.000Z | null | false | fd04a127b3d6801afbe4ba38b66c98d0de647e01 | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/winter_style/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Winter Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"art by winter_style"```
If it is too strong, just add [] around it.
Trained until 10000 steps
I added a version trained for 7.5k steps in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-steps version in your folder
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/oVqfSZ2.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/p0cslGJ.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/LJmGvsc.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/T4I0gFQ.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/hzfmsA8.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
tabcoin | null | null | null | false | null | false | tabcoin/test | 2022-10-28T14:03:32.000Z | null | false | dc798fd72a60febdd4093cccebf885bb1a76d4f7 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/tabcoin/test/resolve/main/README.md | ---
license: openrail
---
|
taln-ls2n | null | \ | KPBiomed benchmark dataset for keyphrase extraction an generation. | false | 3 | false | taln-ls2n/kpbiomed | 2022-10-28T08:37:27.000Z | null | false | e04385895567e9b2ea446b37282f37e8ff436065 | [] | [
"annotations_creators:unknown",
"language_creators:unknown",
"language:en",
"license:cc-by-nc-4.0",
"multilinguality:monolingual",
"task_categories:text-generation",
"task_ids:keyphrase-generation",
"task_ids:keyphrase-extraction",
"size_categories:100K<n<1M"
] | https://huggingface.co/datasets/taln-ls2n/kpbiomed/resolve/main/README.md | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- 100K<n<1M
pretty_name: KP-Biomed
---
# KPBiomed, A Large-Scale Dataset for keyphrase generation
## About
This dataset is made of 5.6 million abstracts with author-assigned keyphrases.
Details about the dataset can be found in the original paper:
Maël Houbre, Florian Boudin and Béatrice Daille. 2022. A Large-Scale Dataset for Biomedical Keyphrase Generation. In Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI 2022).
Reference (author-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in the following paper:
- Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/).
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
Text pre-processing (tokenization) is carried out using spacy (en_core_web_sm model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in nltk) is applied before reference keyphrases are matched against the source text.
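As a rough sketch of the matching step described above (stem first, then check whether a reference keyphrase occurs in the source text), the snippet below combines spacy tokenization with NLTK's Porter stemmer; the authors' exact hyphen-preserving tokenizer rule is not reproduced here, so this is an approximation rather than the official preprocessing script.
```
import spacy
from nltk.stem.porter import PorterStemmer

# Approximation of the preprocessing above: spacy tokenization followed by
# Porter stemming (the special rule for hyphenated words is omitted).
nlp = spacy.load("en_core_web_sm", disable=["parser", "ner"])
stemmer = PorterStemmer()

def stems(text):
    return [stemmer.stem(tok.text.lower()) for tok in nlp(text) if not tok.is_space]

def keyphrase_is_present(keyphrase, title, abstract):
    """True if the stemmed keyphrase occurs contiguously in the stemmed title + abstract."""
    kp = stems(keyphrase)
    doc = stems(title + " " + abstract)
    return any(doc[i:i + len(kp)] == kp for i in range(len(doc) - len(kp) + 1))
```
A keyphrase that passes this check roughly corresponds to the <u>P</u>resent category of the PRMU scheme.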
## Content
The details of the dataset are in the table below:
| Split | # documents | # keyphrases by document (average) | % Present | % Reordered | % Mixed | % Unseen |
| :----------- | ----------: | ---------------------------------: | --------: | ----------: | ------: | -------: |
| Train small | 500k | 5.24 | 66.31 | 7.16 | 12.60 | 13.93 |
| Train medium | 2M | 5.24 | 66.30 | 7.18 | 12.57 | 13.95 |
| Train large | 5.6M | 5.23 | 66.32 | 7.18 | 12.55 | 13.95 |
| Validation | 20k | 5.25 | 66.44 | 7.07 | 12.45 | 14.05 |
| Test | 20k | 5.22 | 66.59 | 7.22 | 12.44 | 13.75 |
The following data fields are available:
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **mesh terms**: list of indexer assigned MeSH terms if available (around 68% of the articles)
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
- **authors**: list of the article's authors
- **year**: publication year
**NB**: The present keyphrases (represented by the "P" label in the PRMU column) are sorted by their order of appearance in the text (title + text).
|
ckmai24 | null | null | null | false | 1 | false | ckmai24/ghibil-style | 2022-10-26T13:42:00.000Z | null | false | 219a3339e1995bfbe61f6b1753ebc2a19ac87857 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/ckmai24/ghibil-style/resolve/main/README.md | ---
license: afl-3.0
---
|
edbeeching | null | null | null | false | null | false | edbeeching/sample_factory_videos | 2022-11-04T08:00:27.000Z | null | false | 4c4f3977ddd1586764f2bfa883e48d259da7de9a | [] | [
"license:mit"
] | https://huggingface.co/datasets/edbeeching/sample_factory_videos/resolve/main/README.md | ---
license: mit
---
|
YWjimmy | null | null | null | false | null | false | YWjimmy/PeRFception-ScanNet | 2022-10-26T14:56:10.000Z | null | false | 59298c5de4ab4cc1a2bd3522eeb2db35d5fc67aa | [] | [
"license:cc-by-sa-4.0"
] | https://huggingface.co/datasets/YWjimmy/PeRFception-ScanNet/resolve/main/README.md | ---
license: cc-by-sa-4.0
---
|
ScandEval | null | null | null | false | 941 | false | ScandEval/scandiqa-da-mini | 2022-10-26T14:55:55.000Z | null | false | 5b996c11c2566f5ed3d59362a865781881d830fa | [] | [] | https://huggingface.co/datasets/ScandEval/scandiqa-da-mini/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: context
dtype: string
- name: answers_en
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: context_en
dtype: string
- name: title_en
dtype: string
splits:
- name: test
num_bytes: 6637348
num_examples: 2048
- name: train
num_bytes: 3223198
num_examples: 1024
- name: val
num_bytes: 1092295
num_examples: 256
download_size: 6392968
dataset_size: 10952841
---
# Dataset Card for "scandiqa-da-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ScandEval | null | null | null | false | 108 | false | ScandEval/scandiqa-no-mini | 2022-10-26T14:57:02.000Z | null | false | b2fed895f0941a1168d0e309a853f69d29a2d140 | [] | [] | https://huggingface.co/datasets/ScandEval/scandiqa-no-mini/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: context
dtype: string
- name: answers_en
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: context_en
dtype: string
- name: title_en
dtype: string
splits:
- name: test
num_bytes: 6525371
num_examples: 2048
- name: train
num_bytes: 2850103
num_examples: 1024
- name: val
num_bytes: 669384
num_examples: 256
download_size: 5910350
dataset_size: 10044858
---
# Dataset Card for "scandiqa-no-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ScandEval | null | null | null | false | 108 | false | ScandEval/scandiqa-sv-mini | 2022-10-26T14:58:06.000Z | null | false | 702a6dad76adb899f93431c6066bf7f3c751873c | [] | [] | https://huggingface.co/datasets/ScandEval/scandiqa-sv-mini/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: context
dtype: string
- name: answers_en
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: context_en
dtype: string
- name: title_en
dtype: string
splits:
- name: test
num_bytes: 6230235
num_examples: 2048
- name: train
num_bytes: 2789113
num_examples: 1024
- name: val
num_bytes: 658362
num_examples: 256
download_size: 5839591
dataset_size: 9677710
---
# Dataset Card for "scandiqa-sv-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jhworth8 | null | null | null | false | null | false | jhworth8/baileycardosi | 2022-10-26T16:01:24.000Z | null | false | acc530784fffdad35ed44f22b40f1e6a366318a3 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/jhworth8/baileycardosi/resolve/main/README.md | ---
license: apache-2.0
---
|
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/brush_style | 2022-10-29T10:50:13.000Z | null | false | 13f26365766f8f61eea21bf45d65936aaaa70db8 | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/brush_style/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Brush Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"art by brush_style"```
If it is too strong, just add [] around it.
Trained until 10000 steps
I added a version trained for 7.5k steps in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-steps version in your folder
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/Mp2F6GR.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/a2Cmqb4.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/YwSafu4.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/fCFSIs5.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/S8v6sXG.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
CenIA | null | null | null | false | null | false | CenIA/laiones150m | 2022-10-26T17:56:08.000Z | null | false | 5b5a4956aa28fb2cc25fca717c2dacd00a97e4ba | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/CenIA/laiones150m/resolve/main/README.md | ---
license: cc-by-4.0
---
|
woctordho | null | null | null | false | 1 | false | woctordho/img-256-shinkai-2 | 2022-10-26T18:21:18.000Z | null | false | 7072793eff816ebdf7a6b6bc747071e9f81e3a30 | [] | [] | https://huggingface.co/datasets/woctordho/img-256-shinkai-2/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: '000'
1: '001'
2: '002'
3: '003'
4: '004'
5: '005'
6: '006'
7: '007'
8: 008
9: 009
10: '010'
11: '011'
12: '012'
13: '013'
14: '014'
15: '015'
16: '016'
17: '017'
18: 018
19: 019
20: '020'
21: '021'
22: '022'
23: '023'
24: '024'
25: '025'
26: '026'
27: '027'
28: 028
29: 029
30: '030'
31: '031'
32: '032'
33: '033'
34: '034'
35: '035'
36: '036'
37: '037'
38: 038
39: 039
40: '040'
41: '041'
42: '042'
43: '043'
44: '044'
45: '045'
46: '046'
47: '047'
48: 048
49: 049
50: '050'
51: '051'
52: '052'
53: '053'
54: '054'
55: '055'
56: '056'
57: '057'
58: 058
59: 059
60: '060'
61: '061'
62: '062'
63: '063'
64: '064'
65: '065'
66: '066'
67: '067'
68: 068
69: 069
70: '070'
71: '071'
72: '072'
73: '073'
74: '074'
75: '075'
76: '076'
77: '077'
78: 078
79: 079
80: 080
81: 081
82: 082
83: 083
84: 084
85: 085
86: 086
87: 087
88: 088
89: 089
90: 090
91: 091
92: 092
93: 093
94: 094
95: 095
96: 096
97: 097
98: 098
99: 099
splits:
- name: train
num_bytes: 15674516006.68
num_examples: 811410
download_size: 11658988354
dataset_size: 15674516006.68
---
# Dataset Card for "img-256-shinkai-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chattermill | null | null | null | false | 5 | false | chattermill/crabsa | 2022-11-01T19:51:01.000Z | null | false | f578c212cc348679720516b65fd4317223206bf1 | [] | [
"license:mit"
] | https://huggingface.co/datasets/chattermill/crabsa/resolve/main/README.md | ---
license: mit
---
|
Aserehe6546545 | null | null | null | false | null | false | Aserehe6546545/Ghgfgg | 2022-10-26T19:22:13.000Z | null | false | 61f4efc23daf87b98918ca90c359e9bb8f92a900 | [] | [] | https://huggingface.co/datasets/Aserehe6546545/Ghgfgg/resolve/main/README.md | Cómo reclamar los daños después de un apagón eléctrico: las indemnizaciones que debe costear la empresa tras cortar el suministro |
woctordho | null | null | null | false | 4 | false | woctordho/img-256-danbooru | 2022-10-26T20:48:32.000Z | null | false | bdb6bf09f2df09ae595ddd27bdf8267adc656add | [] | [] | https://huggingface.co/datasets/woctordho/img-256-danbooru/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: '000'
1: '001'
2: '002'
3: '003'
4: '004'
5: '005'
6: '006'
7: '007'
8: 008
9: 009
10: '010'
11: '011'
12: '012'
13: '013'
14: '014'
15: '015'
16: '016'
17: '017'
18: 018
19: 019
20: '020'
21: '021'
22: '022'
23: '023'
24: '024'
25: '025'
26: '026'
27: '027'
28: 028
29: 029
30: '030'
31: '031'
32: '032'
33: '033'
34: '034'
35: '035'
36: '036'
37: '037'
38: 038
39: 039
40: '040'
41: '041'
42: '042'
43: '043'
44: '044'
45: '045'
46: '046'
47: '047'
48: 048
49: 049
50: '050'
51: '051'
52: '052'
53: '053'
54: '054'
55: '055'
56: '056'
57: '057'
58: 058
59: 059
60: '060'
61: '061'
62: '062'
63: '063'
64: '064'
65: '065'
66: '066'
67: '067'
68: 068
69: 069
70: '070'
71: '071'
72: '072'
73: '073'
74: '074'
75: '075'
76: '076'
77: '077'
78: 078
79: 079
80: 080
81: 081
82: 082
83: 083
84: 084
85: 085
86: 086
87: 087
88: 088
89: 089
90: 090
91: 091
92: 092
93: 093
94: 094
95: 095
96: 096
97: 097
98: 098
99: 099
100: '100'
101: '101'
102: '102'
103: '103'
104: '104'
105: '105'
106: '106'
107: '107'
108: '108'
109: '109'
110: '110'
111: '111'
112: '112'
113: '113'
114: '114'
115: '115'
116: '116'
117: '117'
118: '118'
119: '119'
120: '120'
121: '121'
122: '122'
123: '123'
124: '124'
125: '125'
126: '126'
127: '127'
128: '128'
129: '129'
130: '130'
131: '131'
132: '132'
133: '133'
134: '134'
135: '135'
136: '136'
137: '137'
138: '138'
139: '139'
140: '140'
141: '141'
142: '142'
143: '143'
144: '144'
145: '145'
146: '146'
147: '147'
148: '148'
149: '149'
150: '150'
151: '151'
152: '152'
153: '153'
154: '154'
155: '155'
156: '156'
157: '157'
158: '158'
159: '159'
160: '160'
161: '161'
162: '162'
163: '163'
164: '164'
165: '165'
166: '166'
167: '167'
168: '168'
169: '169'
170: '170'
171: '171'
172: '172'
173: '173'
174: '174'
175: '175'
176: '176'
177: '177'
178: '178'
179: '179'
180: '180'
181: '181'
182: '182'
183: '183'
184: '184'
185: '185'
186: '186'
187: '187'
188: '188'
189: '189'
190: '190'
191: '191'
192: '192'
193: '193'
194: '194'
195: '195'
196: '196'
197: '197'
198: '198'
199: '199'
200: '200'
201: '201'
202: '202'
203: '203'
204: '204'
205: '205'
206: '206'
207: '207'
208: '208'
209: '209'
210: '210'
211: '211'
212: '212'
213: '213'
214: '214'
215: '215'
216: '216'
217: '217'
218: '218'
219: '219'
220: '220'
221: '221'
222: '222'
223: '223'
224: '224'
225: '225'
226: '226'
227: '227'
228: '228'
229: '229'
230: '230'
231: '231'
232: '232'
233: '233'
234: '234'
235: '235'
236: '236'
237: '237'
238: '238'
239: '239'
240: '240'
241: '241'
242: '242'
243: '243'
244: '244'
245: '245'
246: '246'
247: '247'
248: '248'
249: '249'
250: '250'
251: '251'
252: '252'
253: '253'
254: '254'
255: '255'
256: '256'
257: '257'
258: '258'
259: '259'
260: '260'
261: '261'
262: '262'
263: '263'
264: '264'
265: '265'
266: '266'
267: '267'
268: '268'
269: '269'
270: '270'
271: '271'
272: '272'
273: '273'
274: '274'
275: '275'
276: '276'
277: '277'
278: '278'
279: '279'
280: '280'
281: '281'
282: '282'
283: '283'
284: '284'
285: '285'
286: '286'
287: '287'
288: '288'
289: '289'
290: '290'
291: '291'
292: '292'
293: '293'
294: '294'
295: '295'
296: '296'
297: '297'
298: '298'
299: '299'
300: '300'
301: '301'
302: '302'
303: '303'
304: '304'
305: '305'
306: '306'
307: '307'
308: '308'
309: '309'
310: '310'
311: '311'
312: '312'
313: '313'
314: '314'
315: '315'
316: '316'
317: '317'
318: '318'
319: '319'
320: '320'
321: '321'
322: '322'
323: '323'
324: '324'
325: '325'
326: '326'
327: '327'
328: '328'
329: '329'
330: '330'
331: '331'
332: '332'
333: '333'
334: '334'
335: '335'
336: '336'
337: '337'
338: '338'
339: '339'
340: '340'
341: '341'
342: '342'
343: '343'
344: '344'
345: '345'
346: '346'
347: '347'
348: '348'
349: '349'
350: '350'
351: '351'
352: '352'
353: '353'
354: '354'
355: '355'
356: '356'
357: '357'
358: '358'
359: '359'
360: '360'
361: '361'
362: '362'
363: '363'
364: '364'
365: '365'
366: '366'
367: '367'
368: '368'
369: '369'
370: '370'
371: '371'
372: '372'
373: '373'
374: '374'
375: '375'
376: '376'
377: '377'
378: '378'
379: '379'
380: '380'
381: '381'
382: '382'
383: '383'
384: '384'
385: '385'
386: '386'
387: '387'
388: '388'
389: '389'
390: '390'
391: '391'
392: '392'
393: '393'
394: '394'
395: '395'
396: '396'
397: '397'
398: '398'
399: '399'
400: '400'
401: '401'
402: '402'
403: '403'
404: '404'
405: '405'
406: '406'
407: '407'
408: '408'
409: '409'
410: '410'
411: '411'
412: '412'
413: '413'
414: '414'
415: '415'
416: '416'
417: '417'
418: '418'
419: '419'
420: '420'
421: '421'
422: '422'
423: '423'
424: '424'
425: '425'
426: '426'
427: '427'
428: '428'
429: '429'
430: '430'
431: '431'
432: '432'
433: '433'
434: '434'
435: '435'
436: '436'
437: '437'
438: '438'
439: '439'
440: '440'
441: '441'
442: '442'
443: '443'
444: '444'
445: '445'
446: '446'
447: '447'
448: '448'
449: '449'
450: '450'
451: '451'
452: '452'
453: '453'
454: '454'
455: '455'
456: '456'
457: '457'
458: '458'
459: '459'
460: '460'
461: '461'
462: '462'
463: '463'
464: '464'
465: '465'
466: '466'
467: '467'
468: '468'
469: '469'
470: '470'
471: '471'
472: '472'
473: '473'
474: '474'
475: '475'
476: '476'
477: '477'
478: '478'
479: '479'
480: '480'
481: '481'
482: '482'
483: '483'
484: '484'
485: '485'
486: '486'
487: '487'
488: '488'
489: '489'
490: '490'
491: '491'
492: '492'
493: '493'
494: '494'
495: '495'
496: '496'
497: '497'
498: '498'
499: '499'
500: '500'
501: '501'
502: '502'
503: '503'
504: '504'
505: '505'
506: '506'
507: '507'
508: '508'
509: '509'
510: '510'
511: '511'
512: '512'
513: '513'
514: '514'
515: '515'
516: '516'
517: '517'
518: '518'
519: '519'
520: '520'
521: '521'
522: '522'
523: '523'
524: '524'
525: '525'
526: '526'
527: '527'
528: '528'
529: '529'
530: '530'
531: '531'
532: '532'
533: '533'
534: '534'
535: '535'
536: '536'
537: '537'
538: '538'
539: '539'
540: '540'
541: '541'
542: '542'
543: '543'
544: '544'
545: '545'
546: '546'
547: '547'
548: '548'
549: '549'
550: '550'
551: '551'
552: '552'
553: '553'
554: '554'
555: '555'
556: '556'
557: '557'
558: '558'
559: '559'
560: '560'
561: '561'
562: '562'
563: '563'
564: '564'
565: '565'
566: '566'
567: '567'
568: '568'
569: '569'
570: '570'
571: '571'
572: '572'
573: '573'
574: '574'
575: '575'
576: '576'
577: '577'
578: '578'
579: '579'
580: '580'
581: '581'
582: '582'
583: '583'
584: '584'
585: '585'
586: '586'
587: '587'
588: '588'
589: '589'
590: '590'
591: '591'
592: '592'
593: '593'
594: '594'
595: '595'
596: '596'
597: '597'
598: '598'
599: '599'
600: '600'
601: '601'
602: '602'
603: '603'
604: '604'
605: '605'
606: '606'
607: '607'
608: '608'
609: '609'
610: '610'
611: '611'
612: '612'
613: '613'
614: '614'
615: '615'
616: '616'
617: '617'
618: '618'
619: '619'
620: '620'
621: '621'
622: '622'
623: '623'
624: '624'
625: '625'
626: '626'
627: '627'
628: '628'
629: '629'
630: '630'
631: '631'
632: '632'
633: '633'
634: '634'
635: '635'
636: '636'
637: '637'
638: '638'
639: '639'
640: '640'
641: '641'
642: '642'
643: '643'
644: '644'
645: '645'
646: '646'
647: '647'
648: '648'
649: '649'
650: '650'
651: '651'
652: '652'
653: '653'
654: '654'
655: '655'
656: '656'
657: '657'
658: '658'
659: '659'
660: '660'
661: '661'
662: '662'
663: '663'
664: '664'
665: '665'
666: '666'
667: '667'
668: '668'
669: '669'
670: '670'
671: '671'
672: '672'
673: '673'
674: '674'
675: '675'
676: '676'
677: '677'
678: '678'
679: '679'
680: '680'
681: '681'
682: '682'
683: '683'
684: '684'
685: '685'
686: '686'
687: '687'
688: '688'
689: '689'
690: '690'
691: '691'
692: '692'
693: '693'
694: '694'
695: '695'
696: '696'
697: '697'
698: '698'
699: '699'
700: '700'
701: '701'
702: '702'
703: '703'
704: '704'
705: '705'
706: '706'
707: '707'
708: '708'
709: '709'
710: '710'
711: '711'
712: '712'
713: '713'
714: '714'
715: '715'
716: '716'
717: '717'
718: '718'
719: '719'
720: '720'
721: '721'
722: '722'
723: '723'
724: '724'
725: '725'
726: '726'
727: '727'
728: '728'
729: '729'
730: '730'
731: '731'
732: '732'
733: '733'
734: '734'
735: '735'
736: '736'
737: '737'
738: '738'
739: '739'
740: '740'
741: '741'
742: '742'
743: '743'
744: '744'
745: '745'
746: '746'
747: '747'
748: '748'
749: '749'
750: '750'
751: '751'
752: '752'
753: '753'
754: '754'
755: '755'
756: '756'
757: '757'
758: '758'
759: '759'
760: '760'
761: '761'
762: '762'
763: '763'
764: '764'
765: '765'
766: '766'
767: '767'
768: '768'
769: '769'
770: '770'
771: '771'
772: '772'
773: '773'
774: '774'
775: '775'
776: '776'
777: '777'
778: '778'
779: '779'
780: '780'
781: '781'
782: '782'
783: '783'
784: '784'
785: '785'
786: '786'
787: '787'
788: '788'
789: '789'
790: '790'
791: '791'
792: '792'
793: '793'
794: '794'
795: '795'
796: '796'
797: '797'
798: '798'
799: '799'
800: '800'
801: '801'
802: '802'
803: '803'
804: '804'
805: '805'
806: '806'
807: '807'
808: '808'
809: '809'
810: '810'
811: '811'
812: '812'
813: '813'
814: '814'
815: '815'
816: '816'
817: '817'
818: '818'
819: '819'
820: '820'
821: '821'
822: '822'
823: '823'
824: '824'
825: '825'
826: '826'
827: '827'
828: '828'
829: '829'
830: '830'
831: '831'
832: '832'
833: '833'
834: '834'
835: '835'
836: '836'
837: '837'
838: '838'
839: '839'
840: '840'
841: '841'
842: '842'
843: '843'
844: '844'
845: '845'
846: '846'
847: '847'
848: '848'
849: '849'
850: '850'
851: '851'
852: '852'
853: '853'
854: '854'
855: '855'
856: '856'
857: '857'
858: '858'
859: '859'
860: '860'
861: '861'
862: '862'
863: '863'
864: '864'
865: '865'
866: '866'
867: '867'
868: '868'
869: '869'
870: '870'
871: '871'
872: '872'
873: '873'
874: '874'
875: '875'
876: '876'
877: '877'
878: '878'
879: '879'
880: '880'
881: '881'
882: '882'
883: '883'
884: '884'
885: '885'
886: '886'
887: '887'
888: '888'
889: '889'
890: '890'
891: '891'
892: '892'
893: '893'
894: '894'
895: '895'
896: '896'
897: '897'
898: '898'
899: '899'
900: '900'
901: '901'
902: '902'
903: '903'
904: '904'
905: '905'
906: '906'
907: '907'
908: '908'
909: '909'
910: '910'
911: '911'
912: '912'
913: '913'
914: '914'
915: '915'
916: '916'
917: '917'
918: '918'
919: '919'
920: '920'
921: '921'
922: '922'
923: '923'
924: '924'
925: '925'
926: '926'
927: '927'
928: '928'
929: '929'
930: '930'
931: '931'
932: '932'
933: '933'
934: '934'
935: '935'
936: '936'
937: '937'
938: '938'
939: '939'
940: '940'
941: '941'
942: '942'
943: '943'
944: '944'
945: '945'
946: '946'
947: '947'
948: '948'
949: '949'
950: '950'
951: '951'
952: '952'
953: '953'
954: '954'
955: '955'
956: '956'
957: '957'
958: '958'
959: '959'
960: '960'
961: '961'
962: '962'
963: '963'
964: '964'
965: '965'
966: '966'
967: '967'
968: '968'
969: '969'
970: '970'
971: '971'
972: '972'
973: '973'
974: '974'
975: '975'
976: '976'
977: '977'
978: '978'
979: '979'
980: '980'
981: '981'
982: '982'
983: '983'
984: '984'
985: '985'
986: '986'
987: '987'
988: '988'
989: '989'
990: '990'
991: '991'
992: '992'
993: '993'
994: '994'
995: '995'
996: '996'
997: '997'
998: '998'
999: '999'
splits:
- name: train
num_bytes: 23623344847.77
num_examples: 990501
download_size: 23097858671
dataset_size: 23623344847.77
---
# Dataset Card for "img-256-danbooru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
f-biondi | null | null | null | false | 19 | false | f-biondi/shape-scenes | 2022-10-26T20:27:10.000Z | null | false | ac2f44906b2ed4f46bf547b7db4c055cb10b601b | [] | [] | https://huggingface.co/datasets/f-biondi/shape-scenes/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 558709806.875
num_examples: 97881
download_size: 317164682
dataset_size: 558709806.875
---
# Dataset Card for "shape-scenes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RaphaelOlivier | null | null | Adversarial examples fooling whisper models | false | 7 | false | RaphaelOlivier/whisper_adversarial_examples | 2022-11-03T21:48:16.000Z | null | false | fd3366545ad353723966836cc25f1ed10b7ef355 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/RaphaelOlivier/whisper_adversarial_examples/resolve/main/README.md | ---
license: cc-by-4.0
---
# Description
This dataset is a subset of [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) and multilingual [CommonVoice](https://commonvoice.mozilla.org/) whose utterances have been adversarially modified to fool the [Whisper](https://huggingface.co/openai/whisper-medium) ASR model.
The original attack [source code](https://github.com/RaphaelOlivier/whisper_attack) is available on GitHub.
The raw [tar files](https://data.mendeley.com/datasets/96dh52hz9r) are hosted on Mendeley Data.
# Configurations and splits
* The `targeted` config contains targeted adversarial examples. When successful, they fool Whisper into predicting the sentence `OK Google, browse to evil.com` even if the input is entirely different. We provide a split for each Whisper model, and one containing the original, unmodified inputs.
* The `untargeted-35` and `untargeted-40` configs contain untargeted adversarial examples, with average Signal-to-Noise Ratios of 35 dB and 40 dB respectively. They fool Whisper into predicting erroneous transcriptions. We provide a split for each Whisper model, and one containing the original, unmodified inputs.
* The `language-<lang>` configs contain adversarial examples in language `<lang>` that fool Whisper into predicting the wrong language. The split `<lang>.<target_lang>` contains inputs that Whisper perceives as `<target_lang>`, and the split `<lang>.original` contains the original inputs in language `<lang>`. We use 3 target languages (English, Tagalog and Serbian) and 7 source languages (English, Italian, Indonesian, Danish, Czech, Lithuanian and Armenian).
# Usage
Here is an example of code using this dataset:
```python
model_name="whisper-medium"
config_name="targeted"
split_name="whisper.medium"
hub_path = "openai/whisper-"+model_name
processor = WhisperProcessor.from_pretrained(hub_path)
model = WhisperForConditionalGeneration.from_pretrained(hub_path).to("cuda")
dataset = load_dataset("RaphaelOlivier/whisper_adversarial_examples",config_name ,split=split_name)
def map_to_pred(batch):
input_features = processor(batch["audio"][0]["array"], return_tensors="pt").input_features
predicted_ids = model.generate(input_features.to("cuda"))
transcription = processor.batch_decode(predicted_ids, normalize = True)
batch['text'][0] = processor.tokenizer._normalize(batch['text'][0])
batch["transcription"] = transcription
return batch
result = dataset.map(map_to_pred, batched=True, batch_size=1)
wer = load("wer")
for t in zip(result["text"],result["transcription"]):
print(t)
print(wer.compute(predictions=result["text"], references=result["transcription"]))
``` |
woctordho | null | null | null | false | null | false | woctordho/img-256-photo-2 | 2022-10-26T21:48:29.000Z | null | false | e705b3f7ddb9b380a94ded7ce9f62aea805ed733 | [] | [] | https://huggingface.co/datasets/woctordho/img-256-photo-2/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: '000'
1: '001'
2: '002'
3: '003'
4: '004'
5: '005'
6: '006'
7: '007'
8: 008
9: 009
10: '010'
11: '011'
12: '012'
13: '013'
14: '014'
15: '015'
16: '016'
17: '017'
18: 018
19: 019
20: '020'
21: '021'
22: '022'
23: '023'
24: '024'
25: '025'
26: '026'
27: '027'
28: 028
29: 029
30: '030'
31: '031'
32: '032'
33: '033'
34: '034'
35: '035'
36: '036'
37: '037'
38: 038
39: 039
40: '040'
41: '041'
42: '042'
43: '043'
44: '044'
45: '045'
46: '046'
47: '047'
48: 048
49: 049
50: '050'
51: '051'
52: '052'
53: '053'
54: '054'
55: '055'
56: '056'
57: '057'
58: 058
59: 059
60: '060'
61: '061'
62: '062'
63: '063'
64: '064'
65: '065'
66: '066'
67: '067'
68: 068
69: 069
70: '070'
71: '071'
72: '072'
73: '073'
74: '074'
75: '075'
76: '076'
77: '077'
78: 078
79: 079
80: 080
81: 081
82: 082
83: 083
84: 084
85: 085
86: 086
87: 087
88: 088
89: 089
90: 090
91: 091
92: 092
93: 093
94: 094
95: 095
96: 096
97: 097
98: 098
99: 099
100: '100'
101: '101'
102: '102'
103: '103'
104: '104'
105: '105'
106: '106'
107: '107'
108: '108'
109: '109'
110: '110'
111: '111'
112: '112'
113: '113'
114: '114'
115: '115'
116: '116'
117: '117'
118: '118'
119: '119'
120: '120'
121: '121'
122: '122'
123: '123'
124: '124'
125: '125'
126: '126'
127: '127'
128: '128'
129: '129'
130: '130'
131: '131'
132: '132'
133: '133'
134: '134'
135: '135'
136: '136'
137: '137'
138: '138'
139: '139'
140: '140'
141: '141'
142: '142'
143: '143'
144: '144'
145: '145'
146: '146'
147: '147'
148: '148'
149: '149'
150: '150'
151: '151'
152: '152'
153: '153'
154: '154'
155: '155'
156: '156'
157: '157'
158: '158'
159: '159'
160: '160'
161: '161'
162: '162'
163: '163'
164: '164'
165: '165'
166: '166'
167: '167'
168: '168'
169: '169'
170: '170'
171: '171'
172: '172'
173: '173'
174: '174'
175: '175'
176: '176'
177: '177'
178: '178'
179: '179'
180: '180'
181: '181'
182: '182'
183: '183'
184: '184'
185: '185'
186: '186'
187: '187'
188: '188'
189: '189'
190: '190'
191: '191'
192: '192'
193: '193'
194: '194'
195: '195'
196: '196'
197: '197'
198: '198'
199: '199'
200: '200'
201: '201'
202: '202'
203: '203'
204: '204'
205: '205'
206: '206'
207: '207'
208: '208'
209: '209'
210: '210'
211: '211'
212: '212'
213: '213'
214: '214'
215: '215'
216: '216'
217: '217'
218: '218'
219: '219'
220: '220'
221: '221'
222: '222'
223: '223'
224: '224'
225: '225'
226: '226'
227: '227'
228: '228'
229: '229'
230: '230'
231: '231'
232: '232'
233: '233'
234: '234'
235: '235'
236: '236'
237: '237'
238: '238'
239: '239'
240: '240'
241: '241'
242: '242'
243: '243'
244: '244'
245: '245'
246: '246'
247: '247'
248: '248'
249: '249'
250: '250'
251: '251'
252: '252'
253: '253'
254: '254'
255: '255'
256: '256'
257: '257'
258: '258'
259: '259'
260: '260'
261: '261'
262: '262'
263: '263'
264: '264'
265: '265'
266: '266'
267: '267'
268: '268'
269: '269'
270: '270'
271: '271'
272: '272'
273: '273'
274: '274'
275: '275'
276: '276'
277: '277'
278: '278'
279: '279'
280: '280'
281: '281'
282: '282'
283: '283'
284: '284'
285: '285'
286: '286'
287: '287'
288: '288'
289: '289'
290: '290'
291: '291'
292: '292'
293: '293'
294: '294'
295: '295'
296: '296'
297: '297'
298: '298'
299: '299'
300: '300'
301: '301'
302: '302'
303: '303'
304: '304'
305: '305'
306: '306'
307: '307'
308: '308'
309: '309'
310: '310'
311: '311'
312: '312'
313: '313'
314: '314'
315: '315'
316: '316'
317: '317'
318: '318'
319: '319'
320: '320'
321: '321'
322: '322'
323: '323'
324: '324'
325: '325'
326: '326'
327: '327'
328: '328'
329: '329'
330: '330'
331: '331'
332: '332'
333: '333'
334: '334'
335: '335'
336: '336'
337: '337'
338: '338'
339: '339'
340: '340'
341: '341'
342: '342'
343: '343'
344: '344'
345: '345'
346: '346'
347: '347'
348: '348'
349: '349'
350: '350'
351: '351'
352: '352'
353: '353'
354: '354'
355: '355'
356: '356'
357: '357'
358: '358'
359: '359'
360: '360'
361: '361'
362: '362'
363: '363'
364: '364'
365: '365'
366: '366'
367: '367'
368: '368'
369: '369'
370: '370'
371: '371'
372: '372'
373: '373'
374: '374'
375: '375'
376: '376'
377: '377'
378: '378'
379: '379'
380: '380'
381: '381'
382: '382'
383: '383'
384: '384'
385: '385'
386: '386'
387: '387'
388: '388'
389: '389'
390: '390'
391: '391'
392: '392'
393: '393'
394: '394'
395: '395'
396: '396'
397: '397'
398: '398'
399: '399'
400: '400'
401: '401'
402: '402'
403: '403'
404: '404'
405: '405'
406: '406'
407: '407'
408: '408'
409: '409'
410: '410'
411: '411'
412: '412'
413: '413'
414: '414'
415: '415'
416: '416'
417: '417'
418: '418'
419: '419'
420: '420'
421: '421'
422: '422'
423: '423'
424: '424'
425: '425'
426: '426'
427: '427'
428: '428'
429: '429'
430: '430'
431: '431'
432: '432'
433: '433'
434: '434'
435: '435'
436: '436'
437: '437'
438: '438'
439: '439'
440: '440'
441: '441'
442: '442'
443: '443'
444: '444'
445: '445'
446: '446'
447: '447'
448: '448'
449: '449'
450: '450'
451: '451'
452: '452'
453: '453'
454: '454'
455: '455'
456: '456'
457: '457'
458: '458'
459: '459'
460: '460'
461: '461'
462: '462'
463: '463'
464: '464'
465: '465'
466: '466'
467: '467'
468: '468'
469: '469'
470: '470'
471: '471'
472: '472'
473: '473'
474: '474'
475: '475'
476: '476'
477: '477'
478: '478'
479: '479'
480: '480'
481: '481'
482: '482'
483: '483'
484: '484'
485: '485'
486: '486'
487: '487'
488: '488'
489: '489'
490: '490'
491: '491'
492: '492'
493: '493'
494: '494'
495: '495'
496: '496'
497: '497'
498: '498'
499: '499'
500: '500'
501: '501'
502: '502'
503: '503'
504: '504'
505: '505'
506: '506'
507: '507'
508: '508'
509: '509'
510: '510'
511: '511'
512: '512'
513: '513'
514: '514'
515: '515'
516: '516'
517: '517'
518: '518'
519: '519'
520: '520'
521: '521'
522: '522'
523: '523'
524: '524'
525: '525'
526: '526'
527: '527'
528: '528'
529: '529'
530: '530'
531: '531'
532: '532'
533: '533'
534: '534'
535: '535'
536: '536'
537: '537'
538: '538'
539: '539'
540: '540'
541: '541'
542: '542'
543: '543'
544: '544'
545: '545'
546: '546'
547: '547'
548: '548'
549: '549'
550: '550'
551: '551'
552: '552'
553: '553'
554: '554'
555: '555'
556: '556'
557: '557'
558: '558'
559: '559'
560: '560'
561: '561'
562: '562'
563: '563'
564: '564'
565: '565'
566: '566'
567: '567'
568: '568'
569: '569'
570: '570'
571: '571'
572: '572'
573: '573'
574: '574'
575: '575'
576: '576'
577: '577'
578: '578'
579: '579'
580: '580'
581: '581'
582: '582'
583: '583'
584: '584'
585: '585'
586: '586'
587: '587'
588: '588'
589: '589'
590: '590'
591: '591'
592: '592'
593: '593'
594: '594'
595: '595'
596: '596'
597: '597'
598: '598'
599: '599'
600: '600'
601: '601'
602: '602'
603: '603'
604: '604'
605: '605'
606: '606'
607: '607'
608: '608'
609: '609'
610: '610'
611: '611'
612: '612'
613: '613'
614: '614'
615: '615'
616: '616'
617: '617'
618: '618'
619: '619'
620: '620'
621: '621'
622: '622'
623: '623'
624: '624'
625: '625'
626: '626'
627: '627'
628: '628'
629: '629'
630: '630'
631: '631'
632: '632'
633: '633'
634: '634'
635: '635'
636: '636'
637: '637'
638: '638'
639: '639'
640: '640'
641: '641'
642: '642'
643: '643'
644: '644'
645: '645'
646: '646'
647: '647'
648: '648'
649: '649'
650: '650'
651: '651'
652: '652'
653: '653'
654: '654'
655: '655'
656: '656'
657: '657'
658: '658'
659: '659'
660: '660'
661: '661'
662: '662'
663: '663'
664: '664'
665: '665'
666: '666'
667: '667'
668: '668'
669: '669'
670: '670'
671: '671'
672: '672'
673: '673'
674: '674'
675: '675'
676: '676'
677: '677'
678: '678'
679: '679'
680: '680'
681: '681'
682: '682'
683: '683'
684: '684'
685: '685'
686: '686'
687: '687'
688: '688'
689: '689'
690: '690'
691: '691'
692: '692'
693: '693'
694: '694'
695: '695'
696: '696'
697: '697'
698: '698'
699: '699'
700: '700'
701: '701'
702: '702'
703: '703'
704: '704'
705: '705'
706: '706'
707: '707'
708: '708'
709: '709'
710: '710'
711: '711'
712: '712'
713: '713'
714: '714'
715: '715'
716: '716'
717: '717'
718: '718'
719: '719'
720: '720'
721: '721'
722: '722'
723: '723'
724: '724'
725: '725'
726: '726'
727: '727'
728: '728'
729: '729'
730: '730'
731: '731'
732: '732'
733: '733'
734: '734'
735: '735'
736: '736'
737: '737'
738: '738'
739: '739'
740: '740'
741: '741'
742: '742'
743: '743'
744: '744'
745: '745'
746: '746'
747: '747'
748: '748'
749: '749'
750: '750'
751: '751'
752: '752'
753: '753'
754: '754'
755: '755'
756: '756'
757: '757'
758: '758'
759: '759'
760: '760'
761: '761'
762: '762'
763: '763'
764: '764'
765: '765'
766: '766'
767: '767'
768: '768'
769: '769'
770: '770'
771: '771'
772: '772'
773: '773'
774: '774'
775: '775'
776: '776'
777: '777'
778: '778'
779: '779'
780: '780'
781: '781'
782: '782'
783: '783'
784: '784'
785: '785'
786: '786'
787: '787'
788: '788'
789: '789'
790: '790'
791: '791'
792: '792'
793: '793'
794: '794'
795: '795'
796: '796'
797: '797'
798: '798'
799: '799'
800: '800'
801: '801'
802: '802'
803: '803'
804: '804'
805: '805'
806: '806'
807: '807'
808: '808'
809: '809'
810: '810'
811: '811'
812: '812'
813: '813'
814: '814'
815: '815'
816: '816'
817: '817'
818: '818'
819: '819'
820: '820'
821: '821'
822: '822'
823: '823'
824: '824'
825: '825'
826: '826'
827: '827'
828: '828'
829: '829'
830: '830'
831: '831'
832: '832'
833: '833'
834: '834'
835: '835'
836: '836'
837: '837'
838: '838'
839: '839'
840: '840'
841: '841'
842: '842'
843: '843'
844: '844'
845: '845'
846: '846'
847: '847'
848: '848'
849: '849'
850: '850'
851: '851'
852: '852'
853: '853'
854: '854'
855: '855'
856: '856'
857: '857'
858: '858'
859: '859'
860: '860'
861: '861'
862: '862'
863: '863'
864: '864'
865: '865'
866: '866'
867: '867'
868: '868'
869: '869'
870: '870'
871: '871'
872: '872'
873: '873'
874: '874'
875: '875'
876: '876'
877: '877'
878: '878'
879: '879'
880: '880'
881: '881'
882: '882'
883: '883'
884: '884'
885: '885'
886: '886'
887: '887'
888: '888'
889: '889'
890: '890'
891: '891'
892: '892'
893: '893'
894: '894'
895: '895'
896: '896'
897: '897'
898: '898'
899: '899'
900: '900'
901: '901'
902: '902'
903: '903'
904: '904'
905: '905'
906: '906'
907: '907'
908: '908'
909: '909'
910: '910'
911: '911'
912: '912'
913: '913'
914: '914'
915: '915'
916: '916'
917: '917'
918: '918'
919: '919'
920: '920'
921: '921'
922: '922'
923: '923'
924: '924'
925: '925'
926: '926'
927: '927'
928: '928'
929: '929'
930: '930'
931: '931'
932: '932'
933: '933'
934: '934'
935: '935'
936: '936'
937: '937'
938: '938'
939: '939'
940: '940'
941: '941'
942: '942'
943: '943'
944: '944'
945: '945'
946: '946'
947: '947'
948: '948'
949: '949'
950: '950'
951: '951'
952: '952'
953: '953'
954: '954'
955: '955'
956: '956'
957: '957'
958: '958'
959: '959'
960: '960'
961: '961'
962: '962'
963: '963'
964: '964'
965: '965'
966: '966'
967: '967'
968: '968'
969: '969'
970: '970'
971: '971'
972: '972'
973: '973'
974: '974'
975: '975'
976: '976'
977: '977'
978: '978'
979: '979'
980: '980'
981: '981'
982: '982'
983: '983'
984: '984'
985: '985'
986: '986'
987: '987'
988: '988'
989: '989'
990: '990'
991: '991'
992: '992'
993: '993'
994: '994'
995: '995'
996: '996'
997: '997'
998: '998'
999: '999'
splits:
- name: train
num_bytes: 12194184407.684
num_examples: 996698
download_size: 11922345513
dataset_size: 12194184407.684
---
# Dataset Card for "img-256-photo-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kejian | null | null | null | false | 4 | false | kejian/codeparrot-valid-more-filtering-debug | 2022-10-26T21:22:00.000Z | null | false | ce79dcfb8e000cbac80111f73c64d368997230ad | [] | [] | https://huggingface.co/datasets/kejian/codeparrot-valid-more-filtering-debug/resolve/main/README.md | ---
dataset_info:
features:
- name: repo_name
dtype: string
- name: path
dtype: string
- name: copies
dtype: string
- name: size
dtype: string
- name: content
dtype: string
- name: license
dtype: string
- name: hash
dtype: int64
- name: line_mean
dtype: float64
- name: line_max
dtype: int64
- name: alpha_frac
dtype: float64
- name: autogenerated
dtype: bool
- name: ratio
dtype: float64
- name: config_test
dtype: bool
- name: has_no_keywords
dtype: bool
- name: few_assignments
dtype: bool
splits:
- name: train
num_bytes: 957026
num_examples: 100
download_size: 357047
dataset_size: 957026
---
# Dataset Card for "codeparrot-valid-more-filtering-debug"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
omr-saeed | null | null | null | false | null | false | omr-saeed/embeddings.csv | 2022-10-26T21:26:44.000Z | null | false | 3701b1a2657cea5fa791c4f52f79d463825cc386 | [] | [
"license:other"
] | https://huggingface.co/datasets/omr-saeed/embeddings.csv/resolve/main/README.md | ---
license: other
---
|
Twitter | null | null | null | false | null | false | Twitter/TwitterFaveGraph | 2022-10-31T23:58:49.000Z | null | false | 7cdae06c98ca54f8892daf6a80efb4a9d8a2abd0 | [] | [
"arxiv:2210.16271",
"license:cc-by-4.0"
] | https://huggingface.co/datasets/Twitter/TwitterFaveGraph/resolve/main/README.md | ---
license: cc-by-4.0
---
# MiCRO: Multi-interest Candidate Retrieval Online
[](http://makeapullrequest.com)
[](https://arxiv.org/abs/2210.16271)
This repo contains the TwitterFaveGraph dataset from our paper [MiCRO: Multi-interest Candidate Retrieval Online](https://arxiv.org/abs/2210.16271). <br />
[[PDF]](https://arxiv.org/pdf/2210.16271.pdf)
[[HuggingFace Datasets]](https://huggingface.co/Twitter)
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
## TwitterFaveGraph
TwitterFaveGraph is a bipartite directed graph of user nodes to Tweet nodes where an edge represents a "fave" engagement. Each edge is binned into predetermined time chunks which are assigned as ordinals. These ordinals are contiguous and respect time ordering. In total TwitterFaveGraph has 6.7M user nodes, 13M Tweet nodes, and 283M edges. The maximum degree for users is 100 and the minimum degree for users is 1. The maximum
degree for Tweets is 280k and the minimum degree for Tweets is 5.
The data format is displayed below.
| user_index | tweet_index | time_chunk |
| ------------- | ------------- | ---- |
| 1 | 2 | 1 |
| 2 | 1 | 1 |
| 3 | 3 | 2 |
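As a rough illustration of working with this format, the sketch below assumes the edges are shipped as a headerless CSV with the three columns above (the file name is a placeholder; check the actual files in this repository):
```python
import pandas as pd

# Placeholder file name; the column order follows the table above.
edges = pd.read_csv(
    "twitter_favegraph_edges.csv",
    names=["user_index", "tweet_index", "time_chunk"],
)

# Sanity checks against the statistics quoted above.
print("edges:", len(edges))
print("user nodes:", edges["user_index"].nunique())
print("tweet nodes:", edges["tweet_index"].nunique())
print("max user degree:", edges.groupby("user_index").size().max())

# Edges restricted to the first time chunk.
first_chunk = edges[edges["time_chunk"] == 1]
```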
## Citation
If you use TwitterFaveGraph in your work, please cite the following:
```bib
@article{portman2022micro,
title={MiCRO: Multi-interest Candidate Retrieval Online},
author={Portman, Frank and Ragain, Stephen and El-Kishky, Ahmed},
journal={arXiv preprint arXiv:2210.16271},
year={2022}
}
``` |
Twitter | null | null | null | false | null | false | Twitter/TwitterFollowGraph | 2022-10-31T23:55:05.000Z | null | false | 018b0006db780c8e80c37ec87fe27ed2798ab8a8 | [] | [
"arxiv:2205.06205",
"license:cc-by-4.0"
] | https://huggingface.co/datasets/Twitter/TwitterFollowGraph/resolve/main/README.md | ---
license: cc-by-4.0
---
# kNN-Embed: Locally Smoothed Embedding Mixtures For Multi-interest Candidate Retrieval
[](http://makeapullrequest.com)
[](https://arxiv.org/pdf/2205.06205.pdf)
This repo contains the TwitterFollowGraph dataset from our paper [kNN-Embed: Locally Smoothed Embedding Mixtures For Multi-interest Candidate Retrieval](https://arxiv.org/pdf/2205.06205.pdf). <br />
[[PDF]](https://arxiv.org/pdf/2205.06205.pdf)
[[HuggingFace Datasets]](https://huggingface.co/Twitter)
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
## TwitterFollowGraph
TwitterFollowGraph is a bipartite directed graph of user (consumer) nodes to author (producer) nodes, where an edge represents a user "following" an author. Each edge is binned into predetermined time chunks, which are denoted with ordinals. These ordinals are contiguous and respect the time ordering of engagements. In total, TwitterFollowGraph has 261M edges and 15.5M vertices, with a maximum degree of 900K and a minimum degree of 5.
The data format is displayed below.
| user_index | author_index | time_chunk |
| ------------- | ------------- | ---- |
| 1 | 2 | 1 |
| 2 | 1 | 2 |
| 3 | 3 | 2 |
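Since the ordinals respect the time ordering of engagements, a simple temporal train/evaluation split can be built by holding out the most recent chunk. The sketch below assumes the same headerless CSV layout as the table above (the file name is a placeholder):
```python
import pandas as pd

# Placeholder file name; the column order follows the table above.
edges = pd.read_csv(
    "twitter_followgraph_edges.csv",
    names=["user_index", "author_index", "time_chunk"],
)

# Hold out the most recent time chunk for evaluation, train on the rest.
last_chunk = edges["time_chunk"].max()
train_edges = edges[edges["time_chunk"] < last_chunk]
eval_edges = edges[edges["time_chunk"] == last_chunk]

print(len(train_edges), "train edges /", len(eval_edges), "eval edges")
```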
## Citation
If you use TwitterFollowGraph in your work, please cite the following:
```bib
@article{el2022knn,
title={kNN-Embed: Locally Smoothed Embedding Mixtures For Multi-interest Candidate Retrieval},
author={El-Kishky, Ahmed and Markovich, Thomas and Leung, Kenny and Portman, Frank and Haghighi, Aria and Xiao, Ying},
journal={arXiv preprint arXiv:2205.06205},
year={2022}
}
``` |
tramzel | null | null | null | false | 47 | false | tramzel/myfooddata_1_4 | 2022-10-27T01:16:03.000Z | null | false | 36c12245c6c6983ca87449763a19a161a62944c9 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/tramzel/myfooddata_1_4/resolve/main/README.md | ---
license: unknown
---
|
bartelds | null | null | null | false | 34 | false | bartelds/cgn-9 | 2022-10-27T02:05:06.000Z | null | false | bb97c7944b1d62065c1ce397aea9339addaeb67b | [] | [] | https://huggingface.co/datasets/bartelds/cgn-9/resolve/main/README.md | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
splits:
- name: train
num_bytes: 945738292.36
num_examples: 1901
- name: valid
num_bytes: 116163558.0
num_examples: 216
download_size: 1033655907
dataset_size: 1061901850.36
---
# 9 hours of Dutch speech from [Het Corpus Gesproken Nederlands](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/).
- 8 hours of training data and 1 hour of validation data
- Each set is gender balanced and there is no speaker overlap between both data sets
- Individual recordings are between 2 and 20 seconds in length |
bishalbaaniya | null | null | null | false | 31 | false | bishalbaaniya/myaamia_english | 2022-10-27T01:54:46.000Z | null | false | 5b1dd4215db57c070673a560981545a3310ed9ee | [] | [] | https://huggingface.co/datasets/bishalbaaniya/myaamia_english/resolve/main/README.md | # Overview
This is a dataset I am using for my thesis project Myaamia Translator.
<p style="color: darkred">This is not meant to be used for production yet</p>
<i>I just want to try out a few things.</i> |
grullborg | null | null | null | false | 5 | false | grullborg/league_style | 2022-10-27T02:27:20.000Z | null | false | 98c3bf49ac85d8b9fd593a22a414322cbd9ecb36 | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/grullborg/league_style/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# League Style Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"art by league_style-1000-[number of steps for the version you chose]"```
For example, if you chose the 11.5k steps ver, it would be ```"art by league_style-1000-11500"```
If it is too strong, just add [] around it.
The version I generally recommend is the 11.5k-step one; however, I also added 4k-step and 12k-step versions in the files. The 4k-step version tends towards making nice glasses, and the 12k-step version seems to be better at poses rather than close-ups.
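If you prefer working outside the webui, the same embedding file can also be loaded with the diffusers library. This is only a rough sketch under assumptions: the base model, the local file name, and the token string below are placeholders for whichever version you downloaded, and the snippet is untested with this particular embedding.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# File name and token are placeholders; point them at the downloaded embedding
# and the trigger word you want to use in prompts.
pipe.load_textual_inversion(
    "league_style-1000-11500.pt", token="league_style-1000-11500"
)

image = pipe("portrait of a champion, art by league_style-1000-11500").images[0]
image.save("league_style_sample.png")
```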
If you'd like to support the amazing artists whose artwork contributed to this embedding's training, I'd highly recommend you check out [Alex Flores](https://www.artstation.com/alexflores), [Chengwei Pan](https://www.artstation.com/pan), [Horace Hsu](https://www.artstation.com/hozure), [Jem Flores](https://www.artstation.com/jemflores), [SIXMOREVODKA STUDIO](https://www.artstation.com/sixmorevodka), and [West Studio](https://www.artstation.com/weststudio).
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/CP3dcox.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/3uJpYO9.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/3mi25aA.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights to the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may redistribute the weights and use the embedding commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
filevich | null | null | null | false | null | false | filevich/T1K22 | 2022-10-27T02:26:37.000Z | null | false | ee80b1cecccac1a9697b375fedd0c5d70e06f268 | [] | [] | https://huggingface.co/datasets/filevich/T1K22/resolve/main/README.md |  |
grullborg | null | null | null | false | null | false | grullborg/slyvanie_style | 2022-10-27T03:42:32.000Z | null | false | 8e8a05ab1ad3005e3a2f0242377d15b0aa4fada0 | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/grullborg/slyvanie_style/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Slyvanie Style Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"art by slyvanie_style"```
If it is too strong, just add [] around it.
This embedding was trained to 14500 steps.
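As with other textual inversions, the embedding can also be used outside the webui via diffusers; a minimal, untested sketch follows (the base model and file name are placeholders for what you actually downloaded):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Placeholder file name; use the embedding file you downloaded from this repo.
pipe.load_textual_inversion("slyvanie_style.pt", token="slyvanie_style")
image = pipe("a moonlit forest clearing, art by slyvanie_style").images[0]
```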
If you'd like to support the amazing artist whose artwork contributed to this embedding's training, I'd highly recommend you check out slyvanie [here](https://www.deviantart.com/slyvanie), [here](https://www.artstation.com/slyvanie) and [here](https://slyvanie.weebly.com/).
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/0PaBO0M.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/XpdAIdo.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/3TuxD9L.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/jsYluEQ.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/H9XScnZ.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights to the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may redistribute the weights and use the embedding commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
xixixi | null | null | null | false | null | false | xixixi/test_db_sd | 2022-10-27T06:06:11.000Z | null | false | 78114523e12985450506aab2fddc6d4d26889057 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/xixixi/test_db_sd/resolve/main/README.md | ---
license: openrail
---
|
tglcourse | null | null | null | false | 33 | false | tglcourse/5s_birdcall_samples_top20 | 2022-10-27T07:34:37.000Z | null | false | d52a3cb0779c7f33f85566d48737fa380d206769 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/tglcourse/5s_birdcall_samples_top20/resolve/main/README.md | ---
license:
- unknown
pretty_name: 5s Birdcall Samples
---
This dataset contains 5 second clips of birdcalls for audio generation tests.
There are 20 species represented, with ~500 recordings each. Recordings are from xeno-canto.
These clips were taken from longer samples by identifying calls within the recordings using the approach shown here: https://www.kaggle.com/code/johnowhitaker/peak-identification
The audio is sampled at 32 kHz (mono). |
quincyqiang | null | null | null | false | null | false | quincyqiang/test | 2022-10-27T08:17:23.000Z | glue | false | 1904eb1374e46b71e86ae1940dbe01678df6c3c6 | [] | [
"doi:10.57967/hf/0065",
"annotations_creators:other",
"language_creators:other",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:acceptability-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"configs:ax",
"configs:cola",
"configs:mnli",
"configs:mnli_matched",
"configs:mnli_mismatched",
"configs:mrpc",
"configs:qnli",
"configs:qqp",
"configs:rte",
"configs:sst2",
"configs:stsb",
"configs:wnli",
"tags:qa-nli",
"tags:coreference-nli",
"tags:paraphrase-identification"
] | https://huggingface.co/datasets/quincyqiang/test/resolve/main/README.md | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- acceptability-classification
- natural-language-inference
- semantic-similarity-scoring
- sentiment-classification
- text-scoring
paperswithcode_id: glue
pretty_name: GLUE (General Language Understanding Evaluation benchmark)
train-eval-index:
- config: cola
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence: text
label: target
- config: sst2
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence: text
label: target
- config: mrpc
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: qqp
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
question1: text1
question2: text2
label: target
- config: stsb
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: mnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation_matched
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: mnli_mismatched
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: mnli_matched
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: qnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
question: text1
sentence: text2
label: target
- config: rte
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: wnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
configs:
- ax
- cola
- mnli
- mnli_matched
- mnli_mismatched
- mrpc
- qnli
- qqp
- rte
- sst2
- stsb
- wnli
tags:
- qa-nli
- coreference-nli
- paraphrase-identification
---
# Dataset Card for GLUE
## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [ax](#ax)
- [cola](#cola)
- [mnli](#mnli)
- [mnli_matched](#mnli_matched)
- [mnli_mismatched](#mnli_mismatched)
- [mrpc](#mrpc)
- [qnli](#qnli)
- [qqp](#qqp)
- [rte](#rte)
- [sst2](#sst2)
- [stsb](#stsb)
- [wnli](#wnli)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [ax](#ax-1)
- [cola](#cola-1)
- [mnli](#mnli-1)
- [mnli_matched](#mnli_matched-1)
- [mnli_mismatched](#mnli_mismatched-1)
- [mrpc](#mrpc-1)
- [qnli](#qnli-1)
- [qqp](#qqp-1)
- [rte](#rte-1)
- [sst2](#sst2-1)
- [stsb](#stsb-1)
- [wnli](#wnli-1)
- [Data Fields](#data-fields)
- [ax](#ax-2)
- [cola](#cola-2)
- [mnli](#mnli-2)
- [mnli_matched](#mnli_matched-2)
- [mnli_mismatched](#mnli_mismatched-2)
- [mrpc](#mrpc-2)
- [qnli](#qnli-2)
- [qqp](#qqp-2)
- [rte](#rte-2)
- [sst2](#sst2-2)
- [stsb](#stsb-2)
- [wnli](#wnli-2)
- [Data Splits](#data-splits)
- [ax](#ax-3)
- [cola](#cola-3)
- [mnli](#mnli-3)
- [mnli_matched](#mnli_matched-3)
- [mnli_mismatched](#mnli_mismatched-3)
- [mrpc](#mrpc-3)
- [qnli](#qnli-3)
- [qqp](#qqp-3)
- [rte](#rte-3)
- [sst2](#sst2-3)
- [stsb](#stsb-3)
- [wnli](#wnli-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/), is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
#### qqp
The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
### Languages
The language data in GLUE is in English (BCP-47 `en`)
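Each of the tasks above is exposed as a separate configuration. As a minimal sketch (using the canonical `glue` loading script, which this copy mirrors), a single configuration can be loaded like this:
```python
from datasets import load_dataset

# Load the CoLA configuration; any config listed above works the same way.
cola = load_dataset("glue", "cola")

print(cola)                          # train / validation / test splits
print(cola["train"][0]["sentence"])  # first training sentence
```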
## Dataset Structure
### Data Instances
#### ax
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB
An example of 'test' looks as follows.
```
{
"premise": "The cat sat on the mat.",
"hypothesis": "The cat did not sit on the mat.",
"label": -1,
"idx: 0
}
```
#### cola
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB
An example of 'train' looks as follows.
```
{
"sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
"label": 1,
"id": 0
}
```
#### mnli
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB
An example of 'train' looks as follows.
```
{
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"hypothesis": "Product and geography are what make cream skimming work.",
"label": 1,
"idx": 0
}
```
#### mnli_matched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB
An example of 'test' looks as follows.
```
{
"premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
"hypothesis": "Hierbas is a name worth looking out for.",
"label": -1,
"idx": 0
}
```
#### mnli_mismatched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB
An example of 'test' looks as follows.
```
{
"premise": "What have you decided, what are you going to do?",
"hypothesis": "So what's your decision?,
"label": -1,
"idx": 0
}
```
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: a `int32` feature.
#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
#### ax
| |test|
|---|---:|
|ax |1104|
#### cola
| |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551| 1043|1063|
#### mnli
| |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702| 9815| 9832| 9796| 9847|
#### mnli_matched
| |validation|test|
|------------|---------:|---:|
|mnli_matched| 9815|9796|
#### mnli_mismatched
| |validation|test|
|---------------|---------:|---:|
|mnli_mismatched| 9832|9847|
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
Note that each GLUE dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
```
### Contributions
Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
Madge | null | null | null | false | null | false | Madge/test1 | 2022-10-27T08:21:56.000Z | null | false | d2dda6275beb2a5b8bd27d17ea0cb2548f3782fe | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Madge/test1/resolve/main/README.md | ---
license: openrail
---
|
quincyqiang | null | null | null | false | null | false | quincyqiang/test2 | 2022-10-27T08:19:47.000Z | null | false | c975e4aa6efd560a1df5b0462ed88d60a55ec30b | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/quincyqiang/test2/resolve/main/README.md | ---
license: apache-2.0
---
|
MNNEN | null | null | null | false | null | false | MNNEN/face_train_test | 2022-10-27T09:24:42.000Z | null | false | 04d26f02a36a50efc862ed42e30af337c03c4c29 | [] | [
"license:cc0-1.0"
] | https://huggingface.co/datasets/MNNEN/face_train_test/resolve/main/README.md | ---
license: cc0-1.0
---
|
merve | null | null | null | false | null | false | merve/tabular_benchmark | 2022-10-27T10:26:45.000Z | null | false | 2aeec831e49b923d71b4f98ee2629ef659766959 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/merve/tabular_benchmark/resolve/main/README.md | ---
license: apache-2.0
---
|
biglam | null | null | null | false | null | false | biglam/v4design_europeana_style_dataset | 2022-10-27T11:14:30.000Z | null | false | 0dbbdb7bc4eda0c61bcbc73049e8aa39ef30913b | [] | [
"annotations_creators:expert-generated",
"license:other",
"task_categories:image-classification"
] | https://huggingface.co/datasets/biglam/v4design_europeana_style_dataset/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language: []
language_creators: []
license:
- other
multilinguality: []
pretty_name: V4Design Europeana style dataset
size_categories: []
source_datasets: []
tags: []
task_categories:
- image-classification
task_ids: []
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: uri
dtype: string
- name: style
dtype:
class_label:
names:
0: Rococo
1: Baroque
2: Other
- name: rights
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 536168550.923
num_examples: 1613
download_size: 535393230
dataset_size: 536168550.923
---
# Dataset Card for V4Design Europeana style dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
> 1614 paintings belonging to the categories Baroque, Rococo, and Other. The images were obtained using the Europeana Search API, selecting open objects from the art thematic collection. 24k images were obtained, from which the current dataset was derived. The labels were added by the V4Design team, using a custom annotation tool. As described in the project documentation, other categories were used besides Baroque and Rococo. But for the sake of training a machine learning model we have retained only the categories with a significant number of annotations [source](https://zenodo.org/record/4896487)
This version of the dataset is generated from the [CSV file](https://zenodo.org/record/4896487) hosted on Zenodo, which contains the labels together with URLs for the relevant images. Some of these URLs no longer resolve to an image. For consistency with the original dataset, and in case these URLs become valid again, those rows are preserved here. If you only want successfully loaded images in your dataset, you can filter out the missing images as follows.
```python
ds = ds.filter(lambda x: x['image'] is not None)
```
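Put together, a minimal end-to-end sketch (assuming the default configuration and the `train` split listed in the metadata above) looks like this:
```python
from datasets import load_dataset

# Load the dataset and drop rows whose image URL no longer resolves.
ds = load_dataset("biglam/v4design_europeana_style_dataset", split="train")
ds = ds.filter(lambda x: x["image"] is not None)

print(len(ds))                     # rows with a successfully loaded image
print(ds.features["style"].names)  # ['Rococo', 'Baroque', 'Other']
```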
### Supported Tasks and Leaderboards
This dataset is primarily intended for `image-classification`.
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@dataset{europeana_2021_4896487,
author = {Europeana and
V4Design},
title = {V4Design/Europeana style dataset},
month = jun,
year = 2021,
publisher = {Zenodo},
doi = {10.5281/zenodo.4896487},
url = {https://doi.org/10.5281/zenodo.4896487}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
|
polinaeterna | null | null | null | false | 7 | false | polinaeterna/audios | 2022-11-03T12:47:07.000Z | null | false | c4046158a56bfb31a1d03ab48d2b9b340bc2925f | [] | [] | https://huggingface.co/datasets/polinaeterna/audios/resolve/main/README.md | ---
dataset_info:
- config_name: default
drop_labels: true
--- |
biglam | null | @dataset{seuret_mathias_2019_3366686,
author = {Seuret, Mathias and
Limbach, Saskia and
Weichselbaumer, Nikolaus and
Maier, Andreas and
Christlein, Vincent},
title = {{Dataset of Pages from Early Printed Books with
Multiple Font Groups}},
month = aug,
year = 2019,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.3366686},
url = {https://doi.org/10.5281/zenodo.3366686}
} | This dataset is composed of photos of various resolution of 35'623 pages of printed books dating from the 15th to the 18th century. Each page has been attributed by experts from one to five labels corresponding to the font groups used in the text, with two extra-classes for non-textual content and fonts not present in the following list: Antiqua, Bastarda, Fraktur, Gotico Antiqua, Greek, Hebrew, Italic, Rotunda, Schwabacher, and Textura. | false | 2 | false | biglam/early_printed_books_font_detection | 2022-10-28T15:39:50.000Z | null | false | 5b62ab4c6ef313d063a3c4da33cb14bb2fe94dc9 | [] | [
"annotations_creators:expert-generated",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"task_categories:image-classification",
"task_ids:multi-label-image-classification"
] | https://huggingface.co/datasets/biglam/early_printed_books_font_detection/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: labels
sequence:
class_label:
names:
0: greek
1: antiqua
2: other_font
3: not_a_font
4: italic
5: rotunda
6: textura
7: fraktur
8: schwabacher
9: hebrew
10: bastarda
11: gotico_antiqua
splits:
- name: test
num_bytes: 2345451
num_examples: 10757
- name: train
num_bytes: 5430875
num_examples: 24866
download_size: 44212934313
dataset_size: 7776326
annotations_creators:
- expert-generated
language: []
language_creators: []
license:
- cc-by-nc-sa-4.0
multilinguality: []
pretty_name: Early Printed Books Font Detection Dataset
size_categories:
- 10K<n<100K
source_datasets: []
tags: []
task_categories:
- image-classification
task_ids:
- multi-label-image-classification
---
# Dataset Card for Early Printed Books Font Detection Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**https://doi.org/10.5281/zenodo.3366686
- **Paper:**: https://doi.org/10.1145/3352631.3352640
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
> This dataset is composed of photos of various resolution of 35'623 pages of printed books dating from the 15th to the 18th century. Each page has been attributed by experts from one to five labels corresponding to the font groups used in the text, with two extra-classes for non-textual content and fonts not present in the following list: Antiqua, Bastarda, Fraktur, Gotico Antiqua, Greek, Hebrew, Italic, Rotunda, Schwabacher, and Textura.
[More Information Needed]
### Supported Tasks and Leaderboards
The primary use cases for this dataset are:
- `multi-label-image-classification`: This dataset can be used to train a model for multi-label image classification, where each image can have one or more labels.
- `image-classification`: This dataset could also be adapted to predict only a single label for each image.
### Languages
The dataset includes books from a range of libraries (see below for further details). The paper doesn't provide a detailed overview of language breakdown. However, the books are from the 15th-18th century and appear to be dominated by European languages from that time period. The dataset also includes Hebrew.
[More Information Needed]
## Dataset Structure
This dataset has a single configuration.
### Data Instances
An example instance from this dataset:
```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=3072x3840 at 0x7F6AC192D850>,
'labels': [5]}
```
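Since `labels` is stored as a sequence of class IDs, the following sketch (assuming the `train` split; note the full download is roughly 44 GB) maps them back to font-group names:
```python
from datasets import load_dataset

ds = load_dataset("biglam/early_printed_books_font_detection", split="train")

# Map the integer class IDs back to font-group names.
id2name = ds.features["labels"].feature.int2str
example = ds[0]
print([id2name(i) for i in example["labels"]])  # e.g. ['rotunda'] for labels == [5]
```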
### Data Fields
This dataset contains two fields:
- `image`: the image of the book page
- `labels`: one or more labels for the font used in the book page depicted in the `image`
### Data Splits
The dataset is broken into a train and test split with the following breakdown of number of examples:
- train: 24,866
- test: 10,757
## Dataset Creation
### Curation Rationale
The dataset was created to help train and evaluate automatic methods for font detection. The paper describing the dataset also states that:
>data was cherry-picked, thus it is not statistically representative of what can be found in libraries. For example, as we had a small amount of Textura at the start, we specifically looked for more pages containing this font group, so we can expect that less than 3.6 % of randomly selected pages from libraries would contain Textura.
### Source Data
#### Initial Data Collection and Normalization
The images in this dataset are from books held by the British Library (London), Bayerische Staatsbibliothek München, Staatsbibliothek zu Berlin, Universitätsbibliothek Erlangen, Universitätsbibliothek Heidelberg, Staats- und Universitätsbibliothek Göttingen, Stadt- und Universitätsbibliothek Köln, Württembergische Landesbibliothek Stuttgart and Herzog August Bibliothek Wolfenbüttel.
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
ellabettison | null | null | null | false | 4 | false | ellabettison/processed_finbert_dataset_padded_med | 2022-10-27T12:23:57.000Z | null | false | b95cda0a3bc2b1378f8992ea2556d2ab76fb63f5 | [] | [] | https://huggingface.co/datasets/ellabettison/processed_finbert_dataset_padded_med/resolve/main/README.md | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: test
num_bytes: 12801600.0
num_examples: 100000
- name: train
num_bytes: 115214400.0
num_examples: 900000
download_size: 17502018
dataset_size: 128016000.0
---
# Dataset Card for "processed_finbert_dataset_padded_med"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
inria-soda | null | null | null | false | 9 | false | inria-soda/tabular-benchmark | 2022-10-27T15:19:14.000Z | null | false | fb4575853772c62a20203bdd6cc0202f5db4ce4e | [] | [
"task_categories:tabular-classification",
"task_categories:tabular-regression"
] | https://huggingface.co/datasets/inria-soda/tabular-benchmark/resolve/main/README.md |
---
annotations_creators: []
license: []
pretty_name: tabular_benchmark
tags: []
task_categories:
- tabular-classification
- tabular-regression
dataset_info:
- config_name: reg_cat
splits:
- reg_cat/*
- config_name: reg_num
splits:
- reg_num/*
- config_name: clf_cat
splits:
- clf_cat/*
- config_name: clf_num
splits:
- clf_num/*
---
# Tabular Benchmark
## Dataset Description
This dataset is a curation of various datasets from [OpenML](https://www.openml.org/), assembled to benchmark the performance of various machine learning algorithms.
- **Repository:** https://github.com/LeoGrin/tabular-benchmark/community
- **Paper:** https://hal.archives-ouvertes.fr/hal-03723551v2/document
### Dataset Summary
A benchmark built from a curation of various tabular data learning tasks, including:
- Regression from Numerical and Categorical Features
- Regression from Numerical Features
- Classification from Numerical and Categorical Features
- Classification from Numerical Features
### Supported Tasks and Leaderboards
- `tabular-regression`
- `tabular-classification`
## Dataset Structure
### Data Splits
This dataset consists of four splits (folders), one per task type:
- reg_num: Task identifier for regression on numerical features.
- reg_cat: Task identifier for regression on numerical and categorical features.
- clf_num: Task identifier for classification on numerical features.
- clf_cat: Task identifier for classification on numerical and categorical features.
You can load a given dataset by passing `task_name/dataset_name` to the `data_files` argument of `load_dataset`, as below:
```python
from datasets import load_dataset
dataset = load_dataset("inria_soda/tabular-benchmark", data_files="reg_cat/house_sales.csv")
```
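Any other table follows the same pattern; for example, a numerical-classification table can be loaded and handed to pandas for a quick look (the file name `clf_num/electricity.csv` is an assumption based on the naming scheme above):
```python
from datasets import load_dataset

# "clf_num/electricity.csv" is a hypothetical file name following the task_name/dataset_name convention.
dataset = load_dataset("inria-soda/tabular-benchmark", data_files="clf_num/electricity.csv")

df = dataset["train"].to_pandas()
print(df.shape)
print(df.columns.tolist())
```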
## Dataset Creation
### Curation Rationale
This dataset is curated to benchmark the performance of tree-based models against neural networks. The criteria used to pick the datasets are described in the paper as follows:
- **Heterogeneous columns**. Columns should correspond to features of different nature. This excludes
images or signal datasets where each column corresponds to the same signal on different sensors.
- **Not high dimensional**. We only keep datasets with a d/n ratio below 1/10.
- **Undocumented datasets** We remove datasets where too little information is available. We did keep
datasets with hidden column names if it was clear that the features were heterogeneous.
- **I.I.D. data**. We remove stream-like datasets or time series.
- **Real-world data**. We remove artificial datasets but keep some simulated datasets. The difference is
subtle, but we try to keep simulated datasets if learning these datasets are of practical importance
(like the Higgs dataset), and not just a toy example to test specific model capabilities.
- **Not too small**. We remove datasets with too few features (< 4) and too few samples (< 3 000). For
benchmarks on numerical features only, we remove categorical features before checking if enough
features and samples are remaining.
- **Not too easy**. We remove datasets which are too easy. Specifically, we remove a dataset if a default
Logistic Regression (or Linear Regression for regression) reach a score whose relative difference
with the score of both a default Resnet (from Gorishniy et al. [2021]) and a default HistGradientBoosting model (from scikit learn) is below 5%. Other benchmarks use different metrics to
remove too easy datasets, like removing datasets which can be learnt perfectly by a single decision
classifier [Bischl et al., 2021], but this does not account for different Bayes rate of different datasets.
As tree-based methods have been shown to be superior to Logistic Regression [Fernández-Delgado
et al., 2014] in our setting, a close score for these two types of models indicates that we might
already be close to the best achievable score.
- **Not deterministic**. We remove datasets where the target is a deterministic function of the data. This
mostly means removing datasets on games like poker and chess. Indeed, we believe that these
datasets are very different from most real-world tabular datasets, and should be studied separately.
### Source Data
**Numerical Classification**
|dataset_name| n_samples| n_features| original_link| new_link|
|----|----|----|----|----|
|credit| 16714| 10 |https://openml.org/d/151 |https://www.openml.org/d/44089|
|california |20634 |8 |https://openml.org/d/293 |https://www.openml.org/d/44090|
|wine |2554 |11 |https://openml.org/d/722 |https://www.openml.org/d/44091|
|electricity| 38474 |7| https://openml.org/d/821 |https://www.openml.org/d/44120|
|covertype |566602 |10 |https://openml.org/d/993| https://www.openml.org/d/44121|
|pol |10082 |26 |https://openml.org/d/1120 |https://www.openml.org/d/44122|
|house_16H |13488| 16 |https://openml.org/d/1461| https://www.openml.org/d/44123|
|kdd_ipums_la_97-small| 5188 |20 |https://openml.org/d/1489 |https://www.openml.org/d/44124|
|MagicTelescope| 13376| 10| https://openml.org/d/41150 |https://www.openml.org/d/44125|
|bank-marketing |10578 |7 |https://openml.org/d/42769| https://www.openml.org/d/44126|
|phoneme |3172| 5 |https://openml.org/d/1044| https://www.openml.org/d/44127|
|MiniBooNE| 72998| 50 |https://openml.org/d/41168 |https://www.openml.org/d/44128|
|Higgs| 940160 |24| https://www.kaggle.com/c/GiveMeSomeCredit/data?select=cs-training.csv |https://www.openml.org/d/44129|
|eye_movements| 7608 |20 |https://www.dcc.fc.up.pt/ltorgo/Regression/cal_housing.html |https://www.openml.org/d/44130|
|jannis |57580 |54 |https://archive.ics.uci.edu/ml/datasets/wine+quality |https://www.openml.org/d/44131|
**Categorical Classification**
|dataset_name |n_samples| n_features |original_link |new_link|
|----|----|----|----|----|
|electricity |38474| 8 |https://openml.org/d/151| https://www.openml.org/d/44156|
|eye_movements |7608 |23| https://openml.org/d/1044 |https://www.openml.org/d/44157|
|covertype |423680| 54| https://openml.org/d/1114 |https://www.openml.org/d/44159|
|rl |4970 |12 |https://openml.org/d/1596 |https://www.openml.org/d/44160|
|road-safety| 111762 |32 |https://openml.org/d/41160 |https://www.openml.org/d/44161|
|compass |16644 |17 |https://openml.org/d/42803 |https://www.openml.org/d/44162|
|KDDCup09_upselling |5128 |49 |https://www.kaggle.com/datasets/danofer/compass?select=cox-violent-parsed.csv |https://www.openml.org/d/44186|
**Numerical Regression**
|dataset_name| n_samples| n_features| original_link| new_link|
|----|----|----|----|----|
|cpu_act |8192 |21| https://openml.org/d/197 |https://www.openml.org/d/44132|
|pol | 15000| 26 |https://openml.org/d/201| https://www.openml.org/d/44133|
|elevators |16599 |16 |https://openml.org/d/216| https://www.openml.org/d/44134|
|isolet |7797| 613| https://openml.org/d/300| https://www.openml.org/d/44135|
|wine_quality |6497 |11| https://openml.org/d/287 | https://www.openml.org/d/44136|
|Ailerons |13750 |33| https://openml.org/d/296 | https://www.openml.org/d/44137|
|houses |20640| 8| https://openml.org/d/537 | https://www.openml.org/d/44138|
|house_16H |22784| 16 |https://openml.org/d/574 | https://www.openml.org/d/44139|
|diamonds |53940| 6| https://openml.org/d/42225 | https://www.openml.org/d/44140|
|Brazilian_houses |10692| 8 |https://openml.org/d/42688 | https://www.openml.org/d/44141|
|Bike_Sharing_Demand| 17379| 6| https://openml.org/d/42712 | https://www.openml.org/d/44142|
|nyc-taxi-green-dec-2016 |581835| 9| https://openml.org/d/42729 | https://www.openml.org/d/44143|
|house_sales |21613 |15 | https://openml.org/d/42731| https://www.openml.org/d/44144|
|sulfur |10081| 6 | https://openml.org/d/23515 | https://www.openml.org/d/44145|
|medical_charges | 163065 |3 | https://openml.org/d/42720 | https://www.openml.org/d/44146|
|MiamiHousing2016 |13932| 13 |https://openml.org/d/43093 | https://www.openml.org/d/44147|
|superconduct |21263 |79| https://openml.org/d/43174 | https://www.openml.org/d/44148|
|california |20640| 8 |https://www.dcc.fc.up.pt/ltorgo/Regression/cal_housing.html |https://www.openml.org/d/44025|
|fifa |18063 |5 |https://www.kaggle.com/datasets/stefanoleone992/fifa-22-complete-player-dataset| https://www.openml.org/d/44026|
|year |515345 |90 |https://archive.ics.uci.edu/ml/datasets/yearpredictionmsd| https://www.openml.org/d/44027|
**Categorical Regression**
|dataset_name| n_samples| n_features| original_link| new_link|
|----|----|----|----|----|
|yprop_4_1 |8885 |62 |https://openml.org/d/416 |https://www.openml.org/d/44054|
|analcatdata_supreme |4052| 7 |https://openml.org/d/504 |https://www.openml.org/d/44055|
|visualizing_soil |8641| 4 |https://openml.org/d/688 |https://www.openml.org/d/44056|
|black_friday |166821| 9 |https://openml.org/d/41540| https://www.openml.org/d/44057|
|diamonds | 53940| 9| https://openml.org/d/42225| https://www.openml.org/d/44059|
|Mercedes_Benz_Greener_Manufacturing |4209 |359| https://openml.org/d/42570 |https://www.openml.org/d/44061|
|Brazilian_houses| 10692| 11 |https://openml.org/d/42688 |https://www.openml.org/d/44062|
|Bike_Sharing_Demand| 17379| 11 |https://openml.org/d/42712 |https://www.openml.org/d/44063|
|OnlineNewsPopularity |39644| 59| https://openml.org/d/42724| https://www.openml.org/d/44064|
|nyc-taxi-green-dec-2016| 581835 |16 |https://openml.org/d/42729|https://www.openml.org/d/44065|
|house_sales | 21613| 17| https://openml.org/d/42731| https://www.openml.org/d/44066|
|particulate-matter-ukair-2017 |394299 |6| https://openml.org/d/42207| https://www.openml.org/d/44068|
|SGEMM_GPU_kernel_performance | 241600| 9 |https://openml.org/d/43144| https://www.openml.org/d/44069|
### Dataset Curators
Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux.
### Licensing Information
[More Information Needed]
### Citation Information
Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux. Why do tree-based models still outperform deep
learning on typical tabular data? NeurIPS 2022 Datasets and Benchmarks Track, Nov 2022, New
Orleans, United States. hal-03723551v2 |
ellabettison | null | null | null | false | 26 | false | ellabettison/processed_luke_dataset_padded_med | 2022-10-27T13:32:26.000Z | null | false | 21a408b4e3cb930707da154431ac1d92b92b5c55 | [] | [] | https://huggingface.co/datasets/ellabettison/processed_luke_dataset_padded_med/resolve/main/README.md | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: special_tokens_mask
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: test
num_bytes: 10801200.0
num_examples: 100000
- name: train
num_bytes: 97210800.0
num_examples: 900000
download_size: 0
dataset_size: 108012000.0
---
# Dataset Card for "processed_luke_dataset_padded_med"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
woctordho | null | null | null | false | null | false | woctordho/autotrain-data-lojban-translation | 2022-10-27T13:52:53.000Z | null | false | 574256770ce19c1b52cec6cce0a88a6bb713a1ae | [] | [
"language:en",
"language:jb",
"task_categories:translation"
] | https://huggingface.co/datasets/woctordho/autotrain-data-lojban-translation/resolve/main/README.md | ---
language:
- en
- jb
task_categories:
- translation
---
# AutoTrain Dataset for project: lojban-translation
## Dataset Description
This dataset has been automatically processed by AutoTrain for project lojban-translation.
### Languages
The BCP-47 code for the dataset's language is en2jb.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"source": "I read the poem for my child.",
"target": "mi tcidu lo pemci te cu'u le panzi be mi"
},
{
"source": "Jim is learning how to drive a car.",
"target": "la jim cilre fi lo nu klasazri lo karce"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"source": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
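A minimal loading sketch (assuming the default configuration and the `train` split described below):
```python
from datasets import load_dataset

# Assumes the default configuration of this AutoTrain-processed dataset.
ds = load_dataset("woctordho/autotrain-data-lojban-translation", split="train")

pair = ds[0]
print(pair["source"])  # English sentence
print(pair["target"])  # Lojban translation
```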
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 8000 |
| valid | 2000 |
|
ARTemAI | null | null | null | false | null | false | ARTemAI/hands | 2022-10-27T13:45:00.000Z | null | false | 7954876b4f617796157e6441b69128f228eabecc | [] | [
"license:openrail"
] | https://huggingface.co/datasets/ARTemAI/hands/resolve/main/README.md | ---
license: openrail
---
|
ellabettison | null | null | null | false | 8 | false | ellabettison/processed_spanbert_dataset_padded_med | 2022-10-27T14:18:35.000Z | null | false | 842c87bc393eb1f033026258be81f484200a08af | [] | [] | https://huggingface.co/datasets/ellabettison/processed_spanbert_dataset_padded_med/resolve/main/README.md | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: test
num_bytes: 12801600.0
num_examples: 100000
- name: train
num_bytes: 115214400.0
num_examples: 900000
download_size: 17707833
dataset_size: 128016000.0
---
# Dataset Card for "processed_spanbert_dataset_padded_med"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pere | null | null | null | false | null | false | pere/sami_parallel | 2022-11-01T09:02:52.000Z | null | false | 381bc18db2d393aa18eeab8f92e0c135aa76ee1b | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/pere/sami_parallel/resolve/main/README.md | ---
license: apache-2.0
---
|
mgb-dx-meetup | null | null | null | false | 12 | false | mgb-dx-meetup/product-reviews | 2022-10-27T15:25:55.000Z | null | false | cb7f336db3519b9ce33ca2dcd11cf0e306f56dea | [] | [] | https://huggingface.co/datasets/mgb-dx-meetup/product-reviews/resolve/main/README.md | ---
dataset_info:
features:
- name: review_id
dtype: string
- name: product_id
dtype: string
- name: reviewer_id
dtype: string
- name: stars
dtype: int32
- name: review_body
dtype: string
- name: review_title
dtype: string
- name: language
dtype: string
- name: product_category
dtype: string
splits:
- name: test
num_bytes: 454952.85
num_examples: 1500
- name: train
num_bytes: 6073361.466666667
num_examples: 20000
download_size: 4034850
dataset_size: 6528314.316666666
---
# Dataset Card for Product Reviews
Customer reviews of Amazon products, categorised by the number of stars assigned to each product. The dataset consists of several thousand reviews in English, German, and French.
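For example, the English one-star reviews can be pulled out with a short filter (a sketch assuming ISO language codes such as `en` in the `language` column and the `train` split listed above):
```python
from datasets import load_dataset

reviews = load_dataset("mgb-dx-meetup/product-reviews", split="train")

# Keep English reviews with a one-star rating; "en" is an assumed language code.
one_star_en = reviews.filter(lambda r: r["language"] == "en" and r["stars"] == 1)
print(len(one_star_en))
print(one_star_en[0]["review_body"])
```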
## Licensing information
This dataset is based on the [`amazon_reviews_multi`](https://huggingface.co/datasets/amazon_reviews_multi) dataset. |
tbrugger | null | null | null | false | 46 | false | tbrugger/full_french | 2022-11-07T10:32:26.000Z | null | false | 270d1027e347dbfa7d80c71a59eca806e58795cd | [] | [] | https://huggingface.co/datasets/tbrugger/full_french/resolve/main/README.md | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
0: O
1: B-Sentence
2: I-Sentence
splits:
- name: test
num_bytes: 1049098.9727705922
num_examples: 441
- name: train
num_bytes: 8385655.054458816
num_examples: 3525
- name: validation
num_bytes: 1049098.9727705922
num_examples: 441
download_size: 1468722
dataset_size: 10483853.0
---
# Dataset Card for "full_french"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nerfgun3 | null | null | null | false | 1 | false | Nerfgun3/ao_style | 2022-10-29T11:16:29.000Z | null | false | f4f954f99f54f4a8261f1ab7b28469550c4bceeb | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/ao_style/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Ao Artist Embedding / Textual Inversion
## Usage
To use this embedding, you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by ao_style"```
If it is too strong, just add [] around it.
Trained until 10000 steps
I added a version trained for 7.5k steps in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-step version in your folder.
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/ec8MaO4.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/N4IRulK.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/22alJny.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/ZPPIs9L.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/XQZvjGs.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/mikeou_art | 2022-10-29T11:18:34.000Z | null | false | 7f557c5d4da73b73ea90c3e0ab9663484f25b992 | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/mikeou_art/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Mikeou Artist Embedding / Textual Inversion
## Usage
To use this embedding, you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by mikeou_art"```
If it is too strong, just add [] around it.
Trained until 10000 steps
I added a version trained for 7.5k steps in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-step version in your folder.
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/Anc83EO.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/NukXbXO.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/LcamHiI.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/sHL81zL.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/vrfu8WV.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
chloeliu | null | null | null | false | null | false | chloeliu/reddit_nosleep_posts | 2022-10-27T15:34:53.000Z | null | false | 49ebe79789fbdca8a8cef155ce3a78dc2475a69e | [] | [
"license:unknown"
] | https://huggingface.co/datasets/chloeliu/reddit_nosleep_posts/resolve/main/README.md | ---
license: unknown
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-ARTeLab__ilpost-ARTeLab__ilpost-d2ea00-1904764775 | 2022-10-27T15:44:41.000Z | null | false | 9961aeb4e5e069a1760792883bbb4df34eb03fad | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:ARTeLab/ilpost"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-ARTeLab__ilpost-ARTeLab__ilpost-d2ea00-1904764775/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- ARTeLab/ilpost
eval_info:
task: summarization
model: ARTeLab/it5-summarization-ilpost
metrics: ['bertscore']
dataset_name: ARTeLab/ilpost
dataset_config: ARTeLab--ilpost
dataset_split: test
col_mapping:
text: source
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-ilpost
* Dataset: ARTeLab/ilpost
* Config: ARTeLab--ilpost
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-ARTeLab__fanpage-ARTeLab__fanpage-6c7fce-1904864776 | 2022-10-27T15:47:53.000Z | null | false | 8ab5d278ab48d4d9943fca87fbaf33774faf65e8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:ARTeLab/fanpage"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-ARTeLab__fanpage-ARTeLab__fanpage-6c7fce-1904864776/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- ARTeLab/fanpage
eval_info:
task: summarization
model: ARTeLab/it5-summarization-fanpage
metrics: ['bertscore']
dataset_name: ARTeLab/fanpage
dataset_config: ARTeLab--fanpage
dataset_split: test
col_mapping:
text: source
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-fanpage
* Dataset: ARTeLab/fanpage
* Config: ARTeLab--fanpage
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-ARTeLab__mlsum-it-ARTeLab__mlsum-it-b0baa7-1904964782 | 2022-10-27T15:55:45.000Z | null | false | 4da865e1b2019c88a45f920e7c8896be5c86033d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:ARTeLab/mlsum-it"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-ARTeLab__mlsum-it-ARTeLab__mlsum-it-b0baa7-1904964782/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- ARTeLab/mlsum-it
eval_info:
task: summarization
model: ARTeLab/it5-summarization-mlsum
metrics: ['bertscore']
dataset_name: ARTeLab/mlsum-it
dataset_config: ARTeLab--mlsum-it
dataset_split: test
col_mapping:
text: source
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-mlsum
* Dataset: ARTeLab/mlsum-it
* Config: ARTeLab--mlsum-it
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model. |
ashraq | null | null | null | false | 10 | false | ashraq/hotel-reviews | 2022-10-27T17:24:29.000Z | null | false | 8e4d20db185e50b3a66dcaa7f87468a48efedd55 | [] | [] | https://huggingface.co/datasets/ashraq/hotel-reviews/resolve/main/README.md | ---
dataset_info:
features:
- name: review_date
dtype: string
- name: hotel_name
dtype: string
- name: review
dtype: string
splits:
- name: train
num_bytes: 15043294
num_examples: 93757
download_size: 6100544
dataset_size: 15043294
---
# Dataset Card for "hotel-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Data was obtained from [here](https://www.kaggle.com/datasets/jiashenliu/515k-hotel-reviews-data-in-europe) |
lambdalabs | null | null | null | false | 18 | false | lambdalabs/naruto-blip-captions | 2022-10-27T21:17:06.000Z | null | false | 1ed13e8ef280bd45e3bbac4cfa8bbd9d64ec9f89 | [] | [] | https://huggingface.co/datasets/lambdalabs/naruto-blip-captions/resolve/main/README.md | # Dataset Card for Naruto BLIP captions
_Dataset used to train [TBD](TBD)._
The original images were obtained from [narutopedia.com](https://naruto.fandom.com/wiki/Narutopedia) and captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
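A minimal loading sketch (only the `train` split exists, as noted above):
```python
from datasets import load_dataset

ds = load_dataset("lambdalabs/naruto-blip-captions", split="train")

sample = ds[0]
sample["image"].save("sample.jpg")  # varying-size PIL JPEG
print(sample["text"])               # BLIP-generated caption
```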
## Example stable diffusion outputs

> "Bill Gates with a hoodie", "John Oliver with Naruto style", "Hello Kitty with Naruto style", "Lebron James with a hat", "Mickael Jackson as a ninja", "Banksy Street art of ninja"
## Citation
If you use this dataset, please cite it as:
```
@misc{cervenka2022naruto2,
author = {Cervenka, Eole},
title = {Naruto BLIP captions},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/lambdalabs/naruto-blip-captions/}}
}
``` |
hasanriaz121 | null | null | null | false | 5 | false | hasanriaz121/reqs | 2022-10-27T18:06:50.000Z | null | false | 29d8c48af080c04fc9e645d72cae49b38866026c | [] | [] | https://huggingface.co/datasets/hasanriaz121/reqs/resolve/main/README.md | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: requirement_txt
dtype: string
- name: EF
dtype: int64
- name: PE
dtype: int64
- name: PO
dtype: int64
- name: RE
dtype: int64
- name: SE
dtype: int64
- name: US
dtype: int64
- name: X
dtype: int64
splits:
- name: test
num_bytes: 53980
num_examples: 285
- name: train
num_bytes: 431941
num_examples: 2308
- name: validation
num_bytes: 49251
num_examples: 257
download_size: 218916
dataset_size: 535172
---
# Dataset Card for "reqs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AmazonScience | null | @inproceedings{sen-etal-2022-mintaka,
title = "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering",
author = "Sen, Priyanka and Aji, Alham Fikri and Saffari, Amir",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.138",
pages = "1604--1619"
} | Mintaka is a complex, natural, and multilingual dataset designed for experimenting with end-to-end
question-answering models. Mintaka is composed of 20,000 question-answer pairs collected in English,
annotated with Wikidata entities, and translated into Arabic, French, German, Hindi, Italian,
Japanese, Portuguese, and Spanish for a total of 180,000 samples.
Mintaka includes 8 types of complex questions, including superlative, intersection, and multi-hop questions,
which were naturally elicited from crowd workers. | false | 38 | false | AmazonScience/mintaka | 2022-10-28T10:55:50.000Z | mintaka | false | 4788cd2a26eae8a1e6534d87b1bfbad82c3a9dc2 | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"license:cc-by-4.0",
"multilinguality:ar",
"multilinguality:de",
"multilinguality:ja",
"multilinguality:hi",
"multilinguality:pt",
"multilinguality:en",
"multilinguality:es",
"multilinguality:it",
"multilinguality:fr",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:question-answering",
"task_ids:open-domain-qa",
"language_bcp47:ar-SA",
"language_bcp47:de-DE",
"language_bcp47:ja-JP",
"language_bcp47:hi-HI",
"language_bcp47:pt-PT",
"language_bcp47:en-EN",
"language_bcp47:es-ES",
"language_bcp47:it-IT",
"language_bcp47:fr-FR"
] | https://huggingface.co/datasets/AmazonScience/mintaka/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- ar
- de
- ja
- hi
- pt
- en
- es
- it
- fr
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: mintaka
pretty_name: Mintaka
language_bcp47:
- ar-SA
- de-DE
- ja-JP
- hi-HI
- pt-PT
- en-EN
- es-ES
- it-IT
- fr-FR
---
# Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/amazon-science/mintaka
- **Repository:** https://github.com/amazon-science/mintaka
- **Paper:** https://aclanthology.org/2022.coling-1.138/
- **Point of Contact:** [GitHub](https://github.com/amazon-science/mintaka)
### Dataset Summary
Mintaka is a complex, natural, and multilingual question answering (QA) dataset composed of 20,000 question-answer pairs elicited from MTurk workers and annotated with Wikidata question and answer entities. Full details on the Mintaka dataset can be found in our paper: https://aclanthology.org/2022.coling-1.138/
To build Mintaka, we explicitly collected questions in 8 complexity types, as well as generic questions:
- Count (e.g., Q: How many astronauts have been elected to Congress? A: 4)
- Comparative (e.g., Q: Is Mont Blanc taller than Mount Rainier? A: Yes)
- Superlative (e.g., Q: Who was the youngest tribute in the Hunger Games? A: Rue)
- Ordinal (e.g., Q: Who was the last Ptolemaic ruler of Egypt? A: Cleopatra)
- Multi-hop (e.g., Q: Who was the quarterback of the team that won Super Bowl 50? A: Peyton Manning)
- Intersection (e.g., Q: Which movie was directed by Denis Villeneuve and stars Timothee Chalamet? A: Dune)
- Difference (e.g., Q: Which Mario Kart game did Yoshi not appear in? A: Mario Kart Live: Home Circuit)
- Yes/No (e.g., Q: Has Lady Gaga ever made a song with Ariana Grande? A: Yes.)
- Generic (e.g., Q: Where was Michael Phelps born? A: Baltimore, Maryland)
- We collected questions about 8 categories: Movies, Music, Sports, Books, Geography, Politics, Video Games, and History
Mintaka is one of the first large-scale complex, natural, and multilingual datasets that can be used for end-to-end question-answering models.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for question answering.
To ensure comparability, please refer to our evaluation script here: https://github.com/amazon-science/mintaka#evaluation
### Languages
All questions were written in English and translated into 8 additional languages: Arabic, French, German, Hindi, Italian, Japanese, Portuguese, and Spanish.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```json
{
"id": "a9011ddf",
"lang": "en",
"question": "What is the seventh tallest mountain in North America?",
"answerText": "Mount Lucania",
"category": "geography",
"complexityType": "ordinal",
"questionEntity":
[
{
"name": "Q49",
"entityType": "entity",
"label": "North America",
"mention": "North America",
"span": [40, 53]
},
{
"name": 7,
"entityType": "ordinal",
"mention": "seventh",
"span": [12, 19]
}
],
"answerEntity":
[
{
"name": "Q1153188",
"label": "Mount Lucania",
}
],
}
```
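A hedged loading sketch (the per-language configuration name `en` is an assumption based on the language codes in this card):
```python
from datasets import load_dataset

# Config name "en" is an assumption; adjust to the language you need.
mintaka = load_dataset("AmazonScience/mintaka", "en")

sample = mintaka["train"][0]
print(sample["question"], "->", sample["answerText"])
print(sample["category"], sample["complexityType"])
```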
### Data Fields
The data fields are the same among all splits.
`id`: a unique ID for the given sample.
`lang`: the language of the question.
`question`: the original question elicited in the corresponding language.
`answerText`: the original answer text elicited in English.
`category`: the category of the question. Options are: geography, movies, history, books, politics, music, videogames, or sports
`complexityType`: the complexity type of the question. Options are: ordinal, intersection, count, superlative, yesno, comparative, multihop, difference, or generic
`questionEntity`: a list of annotated question entities identified by crowd workers.
```
{
"name": The Wikidata Q-code or numerical value of the entity
"entityType": The type of the entity. Options are:
entity, cardinal, ordinal, date, time, percent, quantity, or money
"label": The label of the Wikidata Q-code
"mention": The entity as it appears in the English question text. Will be empty for non-English samples.
"span": The start and end characters of the mention in the English question text. Will be empty for non-English samples.
}
```
`answerEntity`: a list of annotated answer entities identified by crowd workers.
```
{
"name": The Wikidata Q-code or numerical value of the entity
"label": The label of the Wikidata Q-code
}
```
### Data Splits
For each language, we split into train (14,000 samples), dev (2,000 samples), and test (4,000 samples) sets.
### Personal and Sensitive Information
The corpora is free of personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
Amazon Alexa AI.
### Licensing Information
This project is licensed under the CC-BY-4.0 License.
### Citation Information
Please cite the following papers when using this dataset.
```latex
@inproceedings{sen-etal-2022-mintaka,
title = "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering",
author = "Sen, Priyanka and
Aji, Alham Fikri and
Saffari, Amir",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.138",
pages = "1604--1619"
}
```
### Contributions
Thanks to [@afaji](https://github.com/afaji) for adding this dataset. |
hoodhahmed | null | null | null | false | null | false | hoodhahmed/dhivehi_corpus | 2022-10-27T19:00:36.000Z | null | false | 255f251fd722711e93bdb4df90ad4797715331dc | [] | [
"license:openrail"
] | https://huggingface.co/datasets/hoodhahmed/dhivehi_corpus/resolve/main/README.md | ---
license: openrail
---
|
memray | null | null | null | false | 2 | false | memray/keyphrase | 2022-10-29T06:18:55.000Z | null | false | 3d703f89b39dbd62d406e5863b32ea9afb4dc8a5 | [] | [] | https://huggingface.co/datasets/memray/keyphrase/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
|
biglam | null | null | null | false | 1 | false | biglam/early_printed_books_font_detection_loaded | 2022-10-28T08:47:45.000Z | null | false | 61b99919bdf522fee905ba7f3e3e8b67e58e80e5 | [] | [] | https://huggingface.co/datasets/biglam/early_printed_books_font_detection_loaded/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: labels
sequence:
class_label:
names:
0: greek
1: antiqua
2: other_font
3: not_a_font
4: italic
5: rotunda
6: textura
7: fraktur
8: schwabacher
9: hebrew
10: bastarda
11: gotico_antiqua
splits:
- name: test
num_bytes: 11398084794.636
num_examples: 10757
- name: train
num_bytes: 21512059165.866
num_examples: 24866
download_size: 44713803337
dataset_size: 32910143960.502
---
# Dataset Card for "early_printed_books_font_detection_loaded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zZWipeoutZz | null | null | null | false | null | false | zZWipeoutZz/skeleton_slime | 2022-10-28T09:48:03.000Z | null | false | d46098f2cd8b030fe0d6c9e5fe32e0e47aaad681 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/zZWipeoutZz/skeleton_slime/resolve/main/README.md | ---
license: creativeml-openrail-m
---
<h4> Disclosure </h4>
<p> While it's not perfect, I hope you are able to create some nice pieces with it. I am working on improvements for the next embedding, coming soon. If you have any suggestions or issues, please let me know. </p>
<h4> Usage </h4>
To use this embedding, download the file and put it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt, add
<em style="font-weight:600">art by skeleton slime</em>
Add <b>[ ]</b> around it to reduce its weight.
<h4> Included Files </h4>
<ul>
<li>6500 steps <em>Usage: art by skeleton slime- 6500</em></li>
<li>10,000 steps <em>Usage: art by skeleton slime-10000</em> </li>
<li>15,000 steps <em>Usage: art by skeleton slime</em></li>
</ul>
cheers<br>
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><img height="100%/" width="100%" src="https://i.imgur.com/ATm5o4H.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/DpdwiyC.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/qwGmnel.png"></td>
</tr>
</tbody>
</table>
<h4> prompt comparison </h4>
<a href="https://i.imgur.com/SF3kfd4.jpg" target="_blank"><img height="100%" width="100%" src="https://i.imgur.com/SF3kfd4.jpg"></a>
<h4> Licence </h4>
<p><span>This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:</span> </p>
<ol>
<li>You can't use the embedding to deliberately produce or share illegal or harmful outputs or content.</li>
<li>The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.</li>
<li>You may redistribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).
<a rel="noopener nofollow" href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">Please read the full license here</a></li>
</ol> |
adamlouly | null | null | null | false | 1,368 | false | adamlouly/enron_spam_data | 2022-10-27T23:11:14.000Z | null | false | 099c1b164c1ef9ff0e7986bfb8f1b33d3ff8596a | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/adamlouly/enron_spam_data/resolve/main/README.md | ---
license: apache-2.0
---
|
AbderrahmanSkiredj1 | null | null | null | false | 11 | false | AbderrahmanSkiredj1/Tashkeel_MLM | 2022-10-27T21:59:56.000Z | null | false | 32968ad39fc5ce003e3d23c2cc12e3c195adf271 | [] | [] | https://huggingface.co/datasets/AbderrahmanSkiredj1/Tashkeel_MLM/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 47129588
num_examples: 50000
- name: validation
num_bytes: 2298704
num_examples: 2500
download_size: 19497987
dataset_size: 49428292
---
# Dataset Card for "Tashkeel_MLM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Mostafa3zazi | null | null | null | false | 13 | false | Mostafa3zazi/tydiqa_secondary_task | 2022-10-27T22:52:30.000Z | null | false | ff3d266876d88b216558abbb04575e2efe7a252b | [] | [] | https://huggingface.co/datasets/Mostafa3zazi/tydiqa_secondary_task/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 52948607
num_examples: 49881
- name: validation
num_bytes: 5006461
num_examples: 5077
download_size: 29688806
dataset_size: 57955068
---
# Dataset Card for "tydiqa_secondary_task"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vicm0r | null | null | null | false | null | false | vicm0r/eurosat | 2022-10-28T00:17:56.000Z | null | false | f364ba93d5e59758672fdf2ff59b4a505ab3caba | [] | [] | https://huggingface.co/datasets/vicm0r/eurosat/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: AnnualCrop
1: Forest
2: HerbaceousVegetation
3: Highway
4: Industrial
5: Pasture
6: PermanentCrop
7: Residential
8: River
9: SeaLake
splits:
- name: train
num_bytes: 57259856.0
num_examples: 27000
download_size: 88186968
dataset_size: 57259856.0
---
# Dataset Card for "eurosat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
randomwalksky | null | null | null | false | null | false | randomwalksky/shoes20 | 2022-10-28T01:32:51.000Z | null | false | 99a8f2eb0f5e0d1f279020eb6260ca52b77875c4 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/randomwalksky/shoes20/resolve/main/README.md | ---
license: openrail
---
|
xixixi | null | null | null | false | null | false | xixixi/images | 2022-10-28T01:41:32.000Z | null | false | 901ddea7290a85838c328f14b6508db11d942970 | [] | [
"license:other"
] | https://huggingface.co/datasets/xixixi/images/resolve/main/README.md | ---
license: other
---
|
TeddyCat | null | null | null | false | 6 | false | TeddyCat/Human_obj_bg | 2022-11-14T14:40:12.000Z | null | false | 3712ba174793d990a889a2894d434013a7214032 | [] | [] | https://huggingface.co/datasets/TeddyCat/Human_obj_bg/resolve/main/README.md | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 350110.0
num_examples: 20
download_size: 337556
dataset_size: 350110.0
---
# Dataset Card for "Human_obj_bg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Mohaddeseh | null | null | null | false | null | false | Mohaddeseh/BioNLI | 2022-10-28T03:55:43.000Z | null | false | e6769ca6989c97a283bfd1da72627ce56a003b0d | [] | [
"license:cc"
] | https://huggingface.co/datasets/Mohaddeseh/BioNLI/resolve/main/README.md | ---
license: cc
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164906 | 2022-10-28T04:21:46.000Z | null | false | 443f28582af7d75148a31c76a300efa4b5b0108a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot_v1"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164906/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot_v1
eval_info:
task: text_zero_shot_classification
model: facebook/opt-6.7b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot_v1
dataset_config: mathemakitten--winobias_antistereotype_test_cot_v1
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164909 | 2022-10-28T06:25:07.000Z | null | false | 7f7e1e829257c402b1de674dcae98afac66756de | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot_v1"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164909/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot_v1
eval_info:
task: text_zero_shot_classification
model: facebook/opt-66b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot_v1
dataset_config: mathemakitten--winobias_antistereotype_test_cot_v1
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164903 | 2022-10-28T04:08:28.000Z | null | false | 77fee1ab3232c91e763d3505780ec8e6b633e065 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot_v1"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164903/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot_v1
eval_info:
task: text_zero_shot_classification
model: ArthurZ/opt-350m
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot_v1
dataset_config: mathemakitten--winobias_antistereotype_test_cot_v1
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164908 | 2022-10-28T05:06:39.000Z | null | false | ef0156d81134002a97402df78322bb674e400708 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot_v1"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164908/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot_v1
eval_info:
task: text_zero_shot_classification
model: facebook/opt-30b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot_v1
dataset_config: mathemakitten--winobias_antistereotype_test_cot_v1
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164902 | 2022-10-28T04:08:50.000Z | null | false | f130023e49e8c83786974b72fc1852c574028a83 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot_v1"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164902/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot_v1
eval_info:
task: text_zero_shot_classification
model: ArthurZ/opt-125m
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot_v1
dataset_config: mathemakitten--winobias_antistereotype_test_cot_v1
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
mattymchen | null | null | null | false | 1 | false | mattymchen/synli | 2022-10-28T08:52:16.000Z | null | false | 2acaa832b1e781b8a91915bdbc119828f71b5556 | [] | [
"license:odc-by"
] | https://huggingface.co/datasets/mattymchen/synli/resolve/main/README.md | ---
license: odc-by
dataset_info:
features:
- name: sent0
dtype: string
- name: sent1
dtype: string
- name: hard_neg
dtype: string
splits:
- name: train
num_bytes: 11441750654
num_examples: 60939492
download_size: 6904073153
dataset_size: 11441750654
---
# Dataset Card for SyNLI
A synthetic NLI dataset built from open-domain sentences, using T5 as a data synthesizer. The data can be used to train sentence embedding models.
## Data Fields
The data have several fields:
- `sent0`: premise as a string
- `sent1`: entailment hypothesis as a string
- `hard_neg`: contradiction hypothesis as a string
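As a minimal sketch of how these fields can feed a contrastive sentence-embedding objective, the snippet below streams the data and forms (anchor, positive, hard negative) triplets; the repository id is taken from this card, while the use of streaming and the triplet framing are assumptions about how one would consume it.
```python
from datasets import load_dataset

# Stream the training split rather than downloading all ~61M rows at once.
synli = load_dataset("mattymchen/synli", split="train", streaming=True)

# sent0 = premise (anchor), sent1 = entailed hypothesis (positive),
# hard_neg = contradicting hypothesis (hard negative).
for i, row in enumerate(synli):
    triplet = (row["sent0"], row["sent1"], row["hard_neg"])
    print(triplet)
    if i >= 2:  # just show a few examples
        break
```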
|
AhmedBou | null | null | null | false | 1 | false | AhmedBou/Tunisian-Dialect-Corpus | 2022-10-28T05:35:27.000Z | null | false | b10a8af1d9f8bcaa6e6b62336b2531fc210a6ea5 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/AhmedBou/Tunisian-Dialect-Corpus/resolve/main/README.md | ---
license: afl-3.0
---
About 20k comments manually annotated as positive or negative. This corpus was collected from YouTube and Facebook public pages between June 2020 and October 2020.
Corpus columns:
- text: contains the scraped comments
- label: contains the sentiment (pos/neg) of each comment
The data is scraped from different types of channels and pages related to:
News
Music
Politics
Sport
Tv_series
Tv_shows
Approximate label distribution:
11k negative comments
9k positive comments |