| author | cardData | citation | description | disabled | downloads | gated | id | lastModified | paperswithcode_id | private | sha | siblings | tags | readme_url | readme |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063401 | 2022-10-23T21:14:09.000Z | null | false | b8e140cc5b8866a23c246f84785adce295792c8f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063401/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-1.3b_eval
metrics: []
dataset_name: jeffdshen/neqa2_8shot
dataset_config: jeffdshen--neqa2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
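The `col_mapping` section in the card metadata above tells the evaluator how to remap the dataset's own columns (`prompt`, `classes`, `answer_index`) onto the fields the zero-shot classification task expects (`text`, `classes`, `target`). A minimal sketch of that remapping in Python — the function name and sample row are illustrative, not part of AutoTrain:

```python
# Column mapping as declared in the eval_info block above:
# task field -> dataset column.
col_mapping = {"text": "prompt", "classes": "classes", "target": "answer_index"}

def apply_col_mapping(example, mapping):
    """Return a record keyed by the task's expected field names."""
    return {task_field: example[dataset_field]
            for task_field, dataset_field in mapping.items()}

# Hypothetical row shaped like the neqa/redefine_math datasets.
row = {"prompt": "Q: Is the sky green? A:",
       "classes": [" yes", " no"],
       "answer_index": 1}

mapped = apply_col_mapping(row, col_mapping)
print(mapped)  # {'text': 'Q: Is the sky green? A:', 'classes': [' yes', ' no'], 'target': 1}
```

The same mapping pattern applies to every AutoTrain evaluation card in this table; only the model and dataset names vary.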
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063403 | 2022-10-23T21:45:29.000Z | null | false | 728799a60277cd443045c7d19c40d4191162e20e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063403/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-6.7b_eval
metrics: []
dataset_name: jeffdshen/neqa2_8shot
dataset_config: jeffdshen--neqa2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963398 | 2022-10-24T08:46:56.000Z | null | false | 8772c16f195f7f98be77d04eee7b64f965607ffd | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963398/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-66b_eval
metrics: []
dataset_name: jeffdshen/neqa0_8shot
dataset_config: jeffdshen--neqa0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063404 | 2022-10-23T22:21:29.000Z | null | false | ed6362992ac70b04bf6de9b9707127ed9a81913b | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063404/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-13b_eval
metrics: []
dataset_name: jeffdshen/neqa2_8shot
dataset_config: jeffdshen--neqa2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
rhe-rhf | null | null | null | false | null | false | rhe-rhf/dataset | 2022-10-23T21:00:14.000Z | null | false | 451b95597d5a98802f91f65acc9185402c4456ef | [] | [
"license:openrail"
] | https://huggingface.co/datasets/rhe-rhf/dataset/resolve/main/README.md | ---
license: openrail
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063405 | 2022-10-24T00:35:42.000Z | null | false | 065a794edae01a21ecc4da42eba9271432d2c9de | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063405/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-30b_eval
metrics: []
dataset_name: jeffdshen/neqa2_8shot
dataset_config: jeffdshen--neqa2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063406 | 2022-10-24T04:31:40.000Z | null | false | 894d51ef8e444360826fef970442b4b6e882ff64 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/neqa2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063406/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-66b_eval
metrics: []
dataset_name: jeffdshen/neqa2_8shot
dataset_config: jeffdshen--neqa2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163407 | 2022-10-23T21:13:41.000Z | null | false | 1acb7b8cd33ab32069f18e4b3bda902ee86cd7b1 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163407/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-125m_eval
metrics: []
dataset_name: jeffdshen/redefine_math2_8shot
dataset_config: jeffdshen--redefine_math2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163408 | 2022-10-23T21:17:23.000Z | null | false | c5c85b748f0add69a515584101f75d31a23c3eec | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163408/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-350m_eval
metrics: []
dataset_name: jeffdshen/redefine_math2_8shot
dataset_config: jeffdshen--redefine_math2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163409 | 2022-10-23T21:23:27.000Z | null | false | d1b0e19328570ff6d6b66feb6f1f1d49cc2586a6 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163409/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-1.3b_eval
metrics: []
dataset_name: jeffdshen/redefine_math2_8shot
dataset_config: jeffdshen--redefine_math2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163411 | 2022-10-23T21:54:20.000Z | null | false | 29878dfab55f73640bd769dda9097009ba88cac7 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163411/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-6.7b_eval
metrics: []
dataset_name: jeffdshen/redefine_math2_8shot
dataset_config: jeffdshen--redefine_math2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163410 | 2022-10-23T21:36:03.000Z | null | false | 3d4e995498c994515671fe0ffa35466db46aa819 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163410/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-2.7b_eval
metrics: []
dataset_name: jeffdshen/redefine_math2_8shot
dataset_config: jeffdshen--redefine_math2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163412 | 2022-10-23T22:27:37.000Z | null | false | 9774214c388611978defa2b05f2cbb6eafc83ef6 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163412/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-13b_eval
metrics: []
dataset_name: jeffdshen/redefine_math2_8shot
dataset_config: jeffdshen--redefine_math2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163413 | 2022-10-24T00:15:46.000Z | null | false | 2d685476ba41df49df84ce83869ec97f2c48a09d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163413/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-30b_eval
metrics: []
dataset_name: jeffdshen/redefine_math2_8shot
dataset_config: jeffdshen--redefine_math2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163414 | 2022-10-24T03:25:10.000Z | null | false | 003cddb5c422851a1ed82a771e069487afd0dbe5 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math2_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163414/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math2_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-66b_eval
metrics: []
dataset_name: jeffdshen/redefine_math2_8shot
dataset_config: jeffdshen--redefine_math2_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
rufimelo | null | null | null | false | null | false | rufimelo/PortugueseLegalSentences-v0 | 2022-10-24T00:55:55.000Z | null | false | 80806d78f92ead5ac7d7b71e0aad69d63da69144 | [] | [
"annotations_creators:no-annotation",
"language_creators:found",
"language:pt",
"license:apache-2.0",
"multilinguality:monolingual",
"source_datasets:original"
] | https://huggingface.co/datasets/rufimelo/PortugueseLegalSentences-v0/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- pt
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
---
# Portuguese Legal Sentences
A collection of legal sentences from the Portuguese Supreme Court of Justice.
This dataset is intended for masked language modeling (MLM) and TSDAE training.
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263415 | 2022-10-23T21:32:52.000Z | null | false | 71a7df4dec587db7ca75e77e17820f934b9239ee | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263415/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-125m_eval
metrics: []
dataset_name: jeffdshen/redefine_math0_8shot
dataset_config: jeffdshen--redefine_math0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263417 | 2022-10-23T21:55:09.000Z | null | false | 8708ce52df013e02ce64fa1d724dd9658fbe0337 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263417/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-1.3b_eval
metrics: []
dataset_name: jeffdshen/redefine_math0_8shot
dataset_config: jeffdshen--redefine_math0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263416 | 2022-10-23T21:45:45.000Z | null | false | c79968e3486c761ac1dc22e70ef3543566a865d8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263416/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-350m_eval
metrics: []
dataset_name: jeffdshen/redefine_math0_8shot
dataset_config: jeffdshen--redefine_math0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263418 | 2022-10-23T22:09:30.000Z | null | false | 21d6d506cd6554ed5d501ecf3ff9057e3cee19ef | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263418/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-2.7b_eval
metrics: []
dataset_name: jeffdshen/redefine_math0_8shot
dataset_config: jeffdshen--redefine_math0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263420 | 2022-10-23T23:26:46.000Z | null | false | 45863e98e30abf429c3674f303b30e6b12a96c49 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263420/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-13b_eval
metrics: []
dataset_name: jeffdshen/redefine_math0_8shot
dataset_config: jeffdshen--redefine_math0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263419 | 2022-10-23T22:49:05.000Z | null | false | 9afc868b3ca6999fce836cdddbf46b9a034dcb9a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263419/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-6.7b_eval
metrics: []
dataset_name: jeffdshen/redefine_math0_8shot
dataset_config: jeffdshen--redefine_math0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263421 | 2022-10-24T02:54:44.000Z | null | false | 5b2acfeeae4274be62c8f9a05acea1b1b33b63b8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263421/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-30b_eval
metrics: []
dataset_name: jeffdshen/redefine_math0_8shot
dataset_config: jeffdshen--redefine_math0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263422 | 2022-10-24T06:32:10.000Z | null | false | f87ed8be2923f9a467f70386ba48da3cab41992f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jeffdshen/redefine_math0_8shot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263422/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-66b_eval
metrics: []
dataset_name: jeffdshen/redefine_math0_8shot
dataset_config: jeffdshen--redefine_math0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
joshtobin | null | null | null | false | 2 | false | joshtobin/malicious_urls | 2022-10-23T23:28:01.000Z | null | false | afaaca07fb88eeecf10689a1b9c35b2a143dd599 | [] | [] | https://huggingface.co/datasets/joshtobin/malicious_urls/resolve/main/README.md | ---
dataset_info:
features:
- name: url_len
dtype: int64
- name: abnormal_url
dtype: int64
- name: https
dtype: int64
- name: digits
dtype: int64
- name: letters
dtype: int64
- name: shortening_service
dtype: int64
- name: ip_address
dtype: int64
- name: '@'
dtype: int64
- name: '?'
dtype: int64
- name: '-'
dtype: int64
- name: '='
dtype: int64
- name: .
dtype: int64
- name: '#'
dtype: int64
- name: '%'
dtype: int64
- name: +
dtype: int64
- name: $
dtype: int64
- name: '!'
dtype: int64
- name: '*'
dtype: int64
- name: ','
dtype: int64
- name: //
dtype: int64
splits:
- name: train
num_bytes: 32000
num_examples: 200
download_size: 9837
dataset_size: 32000
---
# Dataset Card for "malicious_urls"
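The feature columns listed in the metadata above are typical lexical URL features. The extraction logic actually used for this dataset is not documented, so the following is only an illustrative sketch of how such columns could be computed from a raw URL (column names taken from the schema; `abnormal_url`, `shortening_service`, and `ip_address` are omitted since they require heuristics the card does not describe):

```python
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    """Compute simple lexical features of a URL, mirroring the column
    names in the dataset schema. The original extraction logic is not
    documented, so this is an illustrative approximation."""
    feats = {
        "url_len": len(url),
        "https": int(urlparse(url).scheme == "https"),
        "digits": sum(c.isdigit() for c in url),
        "letters": sum(c.isalpha() for c in url),
    }
    # Counts of the punctuation columns from the schema.
    for ch in ["@", "?", "-", "=", ".", "#", "%", "+", "$", "!", "*", ","]:
        feats[ch] = url.count(ch)
    feats["//"] = url.count("//")
    return feats
```

Each feature is an integer, matching the `int64` dtypes declared in the metadata.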
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
salascorp | null | null | null | false | null | false | salascorp/prueba2 | 2022-10-23T23:15:03.000Z | null | false | 44a2b42b814f780978d8361080fd108504ad31b2 | [] | [] | https://huggingface.co/datasets/salascorp/prueba2/resolve/main/README.md | |
ricecake | null | null | null | false | 2 | false | ricecake/genshin-nahida-kr-tts | 2022-10-24T01:04:01.000Z | null | false | 4fe8a049385f54a1a93658a8596338ff1349a14c | [] | [
"license:cc-by-nc-3.0"
] | https://huggingface.co/datasets/ricecake/genshin-nahida-kr-tts/resolve/main/README.md | ---
license: cc-by-nc-3.0
---
[english]
This is a voice dataset of Nahida's Korean voice lines from Genshin Impact. |
svjack | null | null | null | false | 145 | false | svjack/pokemon-blip-captions-en-zh | 2022-10-31T06:23:03.000Z | null | false | 4b2859096f19a75f613a7a63183a9fadaa48ba3f | [] | [
"license:cc-by-nc-sa-4.0",
"annotations_creators:machine-generated",
"language:en",
"language:zh",
"language_creators:other",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:huggan/few-shot-pokemon",
"task_categories:text-to-image"
] | https://huggingface.co/datasets/svjack/pokemon-blip-captions-en-zh/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
- zh
language_creators:
- other
multilinguality:
- multilingual
pretty_name: 'Pokémon BLIP captions'
size_categories:
- n<1K
source_datasets:
- huggan/few-shot-pokemon
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for Pokémon BLIP captions with English and Chinese.
Dataset used to train a Pokémon text-to-image model; it adds a Chinese column to [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions)
BLIP-generated captions for Pokémon images from the Few Shot Pokémon dataset, introduced in "Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis" (FastGAN). The original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.
Each row contains `image`, `en_text` (the caption in English), and `zh_text` (the caption in Chinese). `image` is a varying-size PIL JPEG, and the text fields hold the accompanying captions. Only a train split is provided.
The Chinese captions are translated by [Deepl](https://www.deepl.com/translator) |
JesusMaginge | null | null | null | false | null | false | JesusMaginge/modelo.de.entrenamiento | 2022-10-24T02:04:28.000Z | null | false | 516ffa2561b51edf85c47b390162cbfc5a117710 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/JesusMaginge/modelo.de.entrenamiento/resolve/main/README.md | ---
license: openrail
---
|
ionghin | null | null | null | false | 15 | false | ionghin/digimon-blip-captions | 2022-10-24T02:31:17.000Z | null | false | 8675196154344395b65903c074a56404326f0945 | [] | [
"license:cc-by-nc-sa-4.0"
] | https://huggingface.co/datasets/ionghin/digimon-blip-captions/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
---
|
jaimebw | null | null | null | false | null | false | jaimebw/test | 2022-10-24T03:42:18.000Z | null | false | aae7557e27746477eb8c0ddb5af04f104edd5f87 | [] | [
"license:mit"
] | https://huggingface.co/datasets/jaimebw/test/resolve/main/README.md | ---
license: mit
---
|
declare-lab | null | null | null | false | 2 | false | declare-lab/MELD | 2022-10-24T04:48:06.000Z | null | false | 9abc51ee7903424ffb971297608aa6d3d0de3bfa | [] | [
"license:gpl-3.0"
] | https://huggingface.co/datasets/declare-lab/MELD/resolve/main/README.md | ---
license: gpl-3.0
---
|
SDbiaseval | null | null | null | false | null | false | SDbiaseval/embeddings | 2022-11-15T19:50:16.000Z | null | false | b1b40c6684c93971ddda3cd200fd134267442be8 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/SDbiaseval/embeddings/resolve/main/README.md | ---
license: apache-2.0
---
|
dustflover | null | null | null | false | null | false | dustflover/rebecca | 2022-10-25T00:29:13.000Z | null | false | 933c432110089d30a0db7225598f9977e0055de4 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/dustflover/rebecca/resolve/main/README.md | ---
license: unknown
---
|
Damitrius | null | null | null | false | null | false | Damitrius/Tester | 2022-10-24T07:17:44.000Z | null | false | da73ea4e703a8eef8b4b6172a2a258a28079851a | [] | [
"license:unknown"
] | https://huggingface.co/datasets/Damitrius/Tester/resolve/main/README.md | ---
license: unknown
---
|
paraphraser | null | null | null | false | null | false | paraphraser/first_data | 2022-10-24T08:12:57.000Z | null | false | 9b1bd372799bcb31783210c1ec8f93ff45db4d7c | [] | [
"license:other"
] | https://huggingface.co/datasets/paraphraser/first_data/resolve/main/README.md | ---
license: other
---
|
jbpark0614 | null | null | null | false | null | false | jbpark0614/speechocean762_train | 2022-10-24T08:58:04.000Z | null | false | 08ef5a71e9a1381eb205610dda214a5b01e3e55a | [] | [] | https://huggingface.co/datasets/jbpark0614/speechocean762_train/resolve/main/README.md | ---
dataset_info:
features:
- name: index
dtype: int64
- name: speaker_id_str
dtype: int64
- name: speaker_id
dtype: int64
- name: question_id
dtype: int64
- name: total_score
dtype: int64
- name: accuracy
dtype: int64
- name: completeness
dtype: float64
- name: fluency
dtype: int64
- name: prosodic
dtype: int64
- name: text
dtype: string
- name: audio
dtype: audio
- name: path
dtype: string
splits:
- name: train
num_bytes: 290407029.0
num_examples: 2500
download_size: 316008757
dataset_size: 290407029.0
---
# Dataset Card for "speechocean762_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Aunsiels | null | null | null | false | 174 | false | Aunsiels/InfantBooks | 2022-10-24T11:20:01.000Z | null | false | 7d9d2774a2abed6351ffaddbee0fdb34d7196457 | [] | [
"annotations_creators:no-annotation",
"language:en",
"language_creators:crowdsourced",
"license:gpl",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"tags:research paper",
"tags:kids",
"tags:children",
"tags:books",
"task_categories:text-generation",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/Aunsiels/InfantBooks/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- crowdsourced
license:
- gpl
multilinguality:
- monolingual
pretty_name: InfantBooks
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- research paper
- kids
- children
- books
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for InfantBooks
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://www.mpi-inf.mpg.de/children-texts-for-commonsense](https://www.mpi-inf.mpg.de/children-texts-for-commonsense)
- **Paper:** Do Children Texts Hold The Key To Commonsense Knowledge?
### Dataset Summary
A dataset of infants/children's books.
### Languages
All the books are in English.
## Dataset Structure
### Data Instances
malis-friend_BookDash-FKB.txt,"Then a taxi driver, hooting around the yard with his wire car. Mali enjoys playing by himself..."
### Data Fields
- title: The title of the book
- content: The content of the book
## Dataset Creation
### Curation Rationale
The goal of the dataset is to study infant books, which are supposed to be easier to understand than normal texts. In particular, the original goal was to study if these texts contain more commonsense knowledge.
### Source Data
#### Initial Data Collection and Normalization
We automatically collected kids' books on the web.
#### Who are the source language producers?
Native speakers.
### Citation Information
```
Romero, J., & Razniewski, S. (2022).
Do Children Texts Hold The Key To Commonsense Knowledge?
In Proceedings of the 2022 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.
```
|
jbpark0614 | null | null | null | false | null | false | jbpark0614/speechocean762_test | 2022-10-24T08:58:50.000Z | null | false | d317974c2e9cf1b847048c49f36760808b2337f6 | [] | [] | https://huggingface.co/datasets/jbpark0614/speechocean762_test/resolve/main/README.md | ---
dataset_info:
features:
- name: index
dtype: int64
- name: speaker_id_str
dtype: int64
- name: speaker_id
dtype: int64
- name: question_id
dtype: int64
- name: total_score
dtype: int64
- name: accuracy
dtype: int64
- name: completeness
dtype: float64
- name: fluency
dtype: int64
- name: prosodic
dtype: int64
- name: text
dtype: string
- name: audio
dtype: audio
- name: path
dtype: string
splits:
- name: train
num_bytes: 288402967.0
num_examples: 2500
download_size: 295709940
dataset_size: 288402967.0
---
# Dataset Card for "speechocean762_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jbpark0614 | null | null | null | false | 513 | false | jbpark0614/speechocean762 | 2022-10-24T09:43:54.000Z | null | false | 8d49c25cba65077c093016cbed51e087f88af77c | [] | [] | https://huggingface.co/datasets/jbpark0614/speechocean762/resolve/main/README.md | ---
dataset_info:
features:
- name: index
dtype: int64
- name: speaker_id_str
dtype: int64
- name: speaker_id
dtype: int64
- name: question_id
dtype: int64
- name: total_score
dtype: int64
- name: accuracy
dtype: int64
- name: completeness
dtype: float64
- name: fluency
dtype: int64
- name: prosodic
dtype: int64
- name: text
dtype: string
- name: audio
dtype: audio
- name: path
dtype: string
splits:
- name: test
num_bytes: 288402967.0
num_examples: 2500
- name: train
num_bytes: 290407029.0
num_examples: 2500
download_size: 0
dataset_size: 578809996.0
---
# Dataset Card for "speechocean762"
The dataset introduced in
- Zhang, Junbo, et al. "speechocean762: An open-source non-native English speech corpus for pronunciation assessment." arXiv preprint arXiv:2104.01378 (2021).
- Currently, phonetic-level evaluation is omitted (only sentence-level total scores are used).
- The original full data link: https://github.com/jimbozhang/speechocean762
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
projecte-aina | null | null | null | false | null | false | projecte-aina/Parafraseja | 2022-11-16T16:20:00.000Z | null | false | c5c5e5de992ff00ab3e16c6282122149de1100da | [] | [
"annotations_creators:CLiC-UB",
"language_creators:found",
"language:ca",
"license:cc-by-nc-nd-4.0",
"multilinguality:monolingual",
"task_categories:text-classification",
"task_ids:multi-input-text-classification"
] | https://huggingface.co/datasets/projecte-aina/Parafraseja/resolve/main/README.md | ---
annotations_creators:
- CLiC-UB
language_creators:
- found
language:
- ca
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: Parafraseja
size_categories:
- ?
task_categories:
- text-classification
task_ids:
- multi-input-text-classification
---
# Dataset Card for Parafraseja
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Point of Contact:** [blanca.calvo@bsc.es](blanca.calvo@bsc.es)
### Dataset Summary
Parafraseja is a dataset of 21,984 pairs of sentences with a label that indicates if they are paraphrases or not. The original sentences were collected from [TE-ca](https://huggingface.co/datasets/projecte-aina/teca) and [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca). For each sentence, an annotator wrote a sentence that was a paraphrase and another that was not. The guidelines of this annotation are available.
### Supported Tasks and Leaderboards
This dataset is mainly intended to train models for paraphrase detection.
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
The dataset consists of pairs of sentences labelled with "Parafrasis" or "No Parafrasis" in a jsonl format.
### Data Instances
<pre>
{
"id": "te1_14977_1",
"source": "teca",
"original": "La 2a part consta de 23 cap\u00edtols, cadascun dels quals descriu un ocell diferent.",
"new": "La segona part consisteix en vint-i-tres cap\u00edtols, cada un dels quals descriu un ocell diferent.",
"label": "Parafrasis"
}
</pre>
### Data Fields
- original: original sentence
- new: new sentence, which could be a paraphrase or a non-paraphrase
- label: relation between original and new
### Data Splits
* dev.json: 2,000 examples
* test.json: 4,000 examples
* train.json: 15,984 examples
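A minimal sketch of parsing these records (field names taken from the data instance above; each file holds one JSON object per line, so the same helper applies to `train.json`, `dev.json`, and `test.json`):

```python
import json

# One record per line, matching the example instance shown above.
sample_line = (
    '{"id": "te1_14977_1", "source": "teca", '
    '"original": "La 2a part consta de 23 capítols.", '
    '"new": "La segona part consisteix en vint-i-tres capítols.", '
    '"label": "Parafrasis"}'
)

def read_jsonl(lines):
    """Parse jsonl content (one JSON object per line) into example dicts."""
    return [json.loads(line) for line in lines if line.strip()]

examples = read_jsonl([sample_line])
labels = {ex["label"] for ex in examples}
# The full data has two labels: "Parafrasis" and "No Parafrasis".
```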
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
The original sentences of this dataset came from the [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca) and the [TE-ca](https://huggingface.co/datasets/projecte-aina/teca).
#### Initial Data Collection and Normalization
11,543 of the original sentences came from TE-ca, and 10,441 came from STS-ca.
#### Who are the source language producers?
TE-ca and STS-ca come from the [Catalan Textual Corpus](https://zenodo.org/record/4519349#.Y1Zs__uxXJF), which consists of several corpora gathered from web crawling and public corpora, and [Vilaweb](https://www.vilaweb.cat), a Catalan newswire.
### Annotations
The dataset is annotated with the label "Parafrasis" or "No Parafrasis" for each pair of sentences.
#### Annotation process
The annotation process was done by a single annotator and reviewed by another.
#### Who are the annotators?
The annotators were Catalan native speakers, with a background on linguistics.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons Attribution Non-commercial No-Derivatives 4.0 International](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Contributions
[N/A]
|
KETI-AIR | null | There is no citation information | # Summary and Report Generation Data
## Introduction
A Korean document-summarization AI dataset built by producing and validating extractive and abstractive summaries from a variety of Korean source documents. In addition to extractive summarization, it provides data for abstractive summarization, i.e. creating a single new summary from the important sentences in the source text, and these data were used to train an actual model.
## Purpose
A dataset for training AI to produce extractive and abstractive summaries from Korean source texts spanning a variety of document types.
## Usage
```python
from datasets import load_dataset

# Load the "base" configuration via the local loading script,
# pointing data_dir at the downloaded AI Hub data.
raw_datasets = load_dataset(
    "aihub_summary_and_report.py",
    "base",
    cache_dir="huggingface_datasets",
    data_dir="data",
    ignore_verifications=True,
)

dataset_train = raw_datasets["train"]

# Print the first training example and stop.
for item in dataset_train:
    print(item)
    exit()
```
## Data Inquiries
| Contact | Phone | Email |
| ------------- | ------------- | ------------- |
| Jeongmin Kim (Director) | 02-3404-7237 | kris.kim@wisenut.co.kr |
## Copyright
### About the Data
The AI training data provided on AI Hub (the "AI Data") were built as part of the "Intelligent Information Industry Infrastructure" project of the Ministry of Science and ICT and the National Information Society Agency (NIA). All rights to the tangible and intangible results of this project, including the data, AI application models, the sources of data authoring tools, and all manuals (collectively, "AI Data etc."), belong to the organizations that built the AI Data etc. and the participating organizations (the "performing organizations etc.") and to NIA.
The AI Data etc. were built to advance AI technologies, products, and services, and may be used for commercial and non-commercial research and development in a wide range of fields, such as intelligent products, services, and chatbots.
### Data Usage Policy
- To use the AI Data etc., you are notified that you must agree to and comply with the following:
1. When using the AI Data etc., you must state that they are a result of an NIA project, and the same statement must appear in any derivative works based on them.
2. Corporations, organizations, or individuals located outside Korea must reach a separate agreement with the performing organizations etc. and NIA in order to use the AI Data etc.
3. Exporting the AI Data etc. outside Korea requires a separate agreement with the performing organizations etc. and NIA.
4. The AI Data may be used only for training AI models. NIA may refuse to provide the AI Data etc. if the purpose, method, or content of their use is judged unlawful or inappropriate, and where they have already been provided, may demand that use be discontinued and that the AI Data etc. be returned or destroyed.
5. The AI Data etc. you receive must not be shown, provided, transferred, lent, or sold to any other corporation, organization, or individual without the approval of the performing organizations etc. and NIA.
6. All civil and criminal liability arising from use beyond the purpose stated in item 4, or from unauthorized viewing, provision, transfer, lending, or sale as described in item 5, rests with the corporation, organization, or individual that used the AI Data etc.
7. If you discover personal information in a dataset provided on AI Hub, you must immediately report it to AI Hub and delete the downloaded dataset.
8. De-identified information (including synthetic data) received from AI Hub must be used safely for purposes such as AI service development, and no attempt may be made to re-identify individuals with it.
9. If NIA later conducts a survey of use cases and outcomes, you must respond to it in good faith.
### How to Request a Data Download
1. To download the AI Data etc. provided through AI Hub, a separate procedure is required in which the applicant verifies their identity, provides information, and states their purpose.
2. Everything other than the AI Data, such as data descriptions and authoring tools, can be used without a separate application or login.
3. For AI Data etc. whose rights holder is not NIA, the usage policy and download procedure of the relevant organization apply; please note that these are unrelated to AI Hub. | false | 27 | false | KETI-AIR/aihub_summary_and_report | 2022-10-31T06:08:09.000Z | null | false | 360fa369dc9acc720e69e036a1d3a0e88936e088 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/aihub_summary_and_report/resolve/main/README.md | ---
license: apache-2.0
---
|
pcoloc | null | null | null | false | null | false | pcoloc/autotrain-data-dragino-7-7-max_495m | 2022-10-24T10:10:04.000Z | null | false | 6af8474d307a30b92b0cc8d550dbf98f4f5d3c85 | [] | [] | https://huggingface.co/datasets/pcoloc/autotrain-data-dragino-7-7-max_495m/resolve/main/README.md | ---
{}
---
# AutoTrain Dataset for project: dragino-7-7-max_495m
## Dataset Description
This dataset has been automatically processed by AutoTrain for project dragino-7-7-max_495m.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_rssi": -91,
"feat_snr": 7.5,
"target": 125.0
},
{
"feat_rssi": -96,
"feat_snr": 5.0,
"target": 125.0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_rssi": "Value(dtype='int64', id=None)",
"feat_snr": "Value(dtype='float64', id=None)",
"target": "Value(dtype='float32', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 853 |
| valid | 286 |
|
projecte-aina | null | null | null | false | null | false | projecte-aina/GuiaCat | 2022-11-10T12:22:44.000Z | null | false | 7060a1427c2e00810ecf9897af35b78250420f00 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:ca",
"license:cc-by-nc-nd-4.0",
"multilinguality:monolingual",
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring"
] | https://huggingface.co/datasets/projecte-aina/GuiaCat/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- ca
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: GuiaCat
size_categories:
- ?
task_categories:
- text-classification
task_ids:
- sentiment-classification
- sentiment-scoring
---
# Dataset Card for GuiaCat
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Point of Contact:** [blanca.calvo@bsc.es](blanca.calvo@bsc.es)
### Dataset Summary
GuiaCat is a dataset consisting of 5,750 restaurant reviews in Catalan, each with 5 associated scores and a sentiment label. The data was provided by [GuiaCat](https://guiacat.cat) and curated by the BSC.
### Supported Tasks and Leaderboards
This corpus is mainly intended for sentiment analysis.
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
The dataset consists of restaurant reviews labelled with 5 scores: service, food, price-quality, environment, and average. Reviews also have a sentiment label, derived from the average score, all stored as a csv file.
### Data Instances
```
7,7,7,7,7.0,"Aquest restaurant té una llarga història. Ara han tornat a canviar d'amos i aquest canvi s'ha vist molt repercutit en la carta, preus, servei, etc. Hi ha molta varietat de menjar, i tot boníssim, amb especialitats molt ben trobades. El servei molt càlid i agradable, dóna gust que et serveixin així. I la decoració molt agradable també, bastant curiosa. En fi, pel meu gust, un bon restaurant i bé de preu.",bo
8,9,8,7,8.0,"Molt recomanable en tots els sentits. El servei és molt atent, pulcre i gens agobiant; alhora els plats també presenten un aspecte acurat, cosa que fa, juntament amb l'ambient, que t'oblidis de que, malauradament, està situat pròxim a l'autopista.Com deia, l'ambient és molt acollidor, té un menjador principal molt elegant, perfecte per quedar bé amb tothom!Tot i això, destacar la bona calitat / preu, ja que aquest restaurant té una carta molt extensa en totes les branques i completa, tant de menjar com de vins. Pel qui entengui de vins, podriem dir que tot i tenir una carta molt rica, es recolza una mica en els clàssics.",molt bo
```
### Data Fields
- service: a score from 0 to 10 grading the service
- food: a score from 0 to 10 grading the food
- price-quality: a score from 0 to 10 grading the relation between price and quality
- environment: a score from 0 to 10 grading the environment
- avg: average of all the scores
- text: the review
- label: it can be "molt bo", "bo", "regular", "dolent", "molt dolent"
### Data Splits
* dev.csv: 500 examples
* test.csv: 500 examples
* train.csv: 4,750 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
The data of this dataset has been provided by [GuiaCat](https://guiacat.cat).
#### Initial Data Collection and Normalization
[N/A]
#### Who are the source language producers?
The language producers were the users from GuiaCat.
### Annotations
The annotations are automatically derived from the scores that the users provided while reviewing the restaurants.
#### Annotation process
The mapping between average scores and labels is:
- Higher than 8: molt bo
- Between 8 and 6: bo
- Between 6 and 4: regular
- Between 4 and 2: dolent
- Less than 2: molt dolent
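The mapping above can be sketched as a small function. Note that the card's ranges overlap at their endpoints, so the boundary behavior below is an assumption; the data instance earlier in this card with an average of 8.0 labelled "molt bo" suggests the higher label wins at exact boundaries:

```python
def score_to_label(avg: float) -> str:
    """Map an average review score (0-10) to a sentiment label.

    Boundary handling at exactly 8, 6, 4, and 2 is assumed (>=),
    since the card's stated ranges overlap at the endpoints.
    """
    if avg >= 8:
        return "molt bo"
    if avg >= 6:
        return "bo"
    if avg >= 4:
        return "regular"
    if avg >= 2:
        return "dolent"
    return "molt dolent"
```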
#### Who are the annotators?
Users
### Personal and Sensitive Information
No personal information included, although it could contain hate or abusive language.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons Attribution Non-commercial No-Derivatives 4.0 International](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Citation Information
```
```
### Contributions
We want to thank GuiaCat for providing this data.
|
darrow-ai | null | null | null | false | 29 | false | darrow-ai/USClassActions | 2022-11-06T12:34:48.000Z | null | false | e274c1e7403c0da06b3ef90f788a85e23ebe0ffc | [] | [
"arxiv:2211.00582",
"license:gpl-3.0"
] | https://huggingface.co/datasets/darrow-ai/USClassActions/resolve/main/README.md | ---
license: gpl-3.0
---
## Dataset Description
- **Homepage:** https://www.darrow.ai/
- **Repository:** https://github.com/darrow-labs/ClassActionPrediction
- **Paper:** https://arxiv.org/abs/2211.00582
- **Leaderboard:** N/A
- **Point of Contact:** [Gila Hayat](mailto:gila@darrow.ai)
### Dataset Summary
USClassActions is an English dataset of 3K complaints from US federal courts, each with its binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset to promote robustness and fairness studies in the critical area of legal NLP. The data was annotated using Darrow.ai's proprietary tool.
### Data Instances
```python
from datasets import load_dataset
dataset = load_dataset('darrow-ai/USClassActions')
```
### Data Fields
`id`: (**int**) a unique identifier of the document \
`target_text`: (**str**) the complaint text \
`verdict`: (**str**) the outcome of the case \
### Curation Rationale
The dataset was curated by Darrow.ai (2022).
### Citation Information
*Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus*
*ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US*
*Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022*
```
@InProceedings{Darrow-Niklaus-2022,
author = {Semo, Gil
and Bernsohn, Dor
and Hagag, Ben
and Hayat, Gila
and Niklaus, Joel},
title = {ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US},
booktitle = {Proceedings of the 2022 Natural Legal Language Processing Workshop},
year = {2022},
location = {Abu Dhabi, EMNLP2022},
}
```
|
Aunsiels | null | null | null | false | null | false | Aunsiels/Quasimodo | 2022-10-24T12:30:23.000Z | null | false | 91c9f5f11a05c71bc9a2a44628ce04d0b39d9cf0 | [] | [
"annotations_creators:machine-generated",
"language:en",
"language_creators:machine-generated",
"license:cc-by-2.0",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"tags:knowledge base",
"tags:commonsense",
"task_categories:question-answering",
"task_ids:closed-domain-qa"
] | https://huggingface.co/datasets/Aunsiels/Quasimodo/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- cc-by-2.0
multilinguality:
- monolingual
pretty_name: Quasimodo
size_categories:
- 100M<n<1B
source_datasets:
- original
tags:
- knowledge base
- commonsense
task_categories:
- question-answering
task_ids:
- closed-domain-qa
---
# Dataset Card for Quasimodo
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/commonsense/quasimodo
- **Repository:** https://github.com/Aunsiels/CSK
- **Paper:** Romero et al., Commonsense Properties from Query Logs and Question Answering Forums, CIKM, 2019
### Dataset Summary
A commonsense knowledge base constructed automatically from question-answering forums and query logs.
### Supported Tasks and Leaderboards
Can be useful for tasks requiring external knowledge such as question answering.
### Languages
English
## Dataset Structure
### Data Instances
```python
{
"subject": "elephant",
"predicate": "has_body_part"
"object": "trunk",
"modality": "TBC[so long trunks] x#x2 // TBC[long trunks] x#x9 // TBC[big trunks] x#x6 // TBC[long trunk] x#x1 // TBC[such big trunks] x#x1 0 0.9999667967035647 elephants have trunks x#x34 x#xGoogle Autocomplete, Bing Autocomplete, Yahoo Questions, Answers.com Questions, Reddit Questions // a elephants have trunks x#x2 x#xGoogle Autocomplete // a elephant have a trunk x#x2 x#xGoogle Autocomplete // elephants have so long trunks x#x2 x#xGoogle Autocomplete // elephants have long trunks x#x8 x#xGoogle Autocomplete, Yahoo Questions, Answers.com Questions // elephants have big trunks x#x6 x#xGoogle Autocomplete, Answers.com Questions, Reddit Questions // elephants have trunk x#x3 x#xGoogle Autocomplete, Yahoo Questions // elephant have long trunks x#x1 x#xGoogle Autocomplete // elephant has a trunk x#x1 x#xGoogle Autocomplete // elephants have a trunk x#x2 x#xAnswers.com Questions // an elephant has a long trunk x#x1 x#xAnswers.com Questions // elephant have trunks x#x1 x#xAnswers.com Questions // elephants have such big trunks x#x1 x#xReddit Questions",
"score": 0.9999667967668732,
"local_sigma": 1.0
}
```
### Data Fields
- subject: The subject of the triple
- predicate: The predicate of the triple
- object: The object of the triple
- modality: Modalities associated with the triple, together with their counts. TBC means the object can be further refined into the listed objects
- is_negative: 1 if the statement was negated
- score: salience score of the supervised scoring model
- local_sigma: the strict conditional probability of observing a (predicate, object) pair with this specific subject, i.e., a measure of how unique a statement is. E.g., local_sigma(lawyers, defend, serial_killers) = 1, while local_sigma(lawyers, make, money) = 0.01, even though both statements have a similar score of 0.99.
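As a rough illustration of this field, local_sigma can be read as the fraction of occurrences of a (predicate, object) pair that involve the given subject. A minimal sketch of that computation over a toy set of triples (the function and sample triples below are illustrative, not part of the released data):

```python
from collections import Counter

def local_sigma(triples, subject, predicate, obj):
    """Fraction of (predicate, object) occurrences that have this subject."""
    po_counts = Counter((p, o) for _, p, o in triples)
    spo_counts = Counter(triples)
    return spo_counts[(subject, predicate, obj)] / po_counts[(predicate, obj)]

triples = [
    ("lawyers", "defend", "serial_killers"),
    ("lawyers", "make", "money"),
    ("doctors", "make", "money"),
    ("bankers", "make", "money"),
]

print(local_sigma(triples, "lawyers", "defend", "serial_killers"))  # 1.0
print(local_sigma(triples, "lawyers", "make", "money"))             # 0.333...
```

A unique statement (only lawyers defend serial killers here) scores 1, while a generic one (many subjects make money) scores low, matching the intuition given above.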
## Dataset Creation
See original paper.
## Additional Information
### Licensing Information
CC-BY 2.0
### Citation Information
Romero et al., Commonsense Properties from Query Logs and Question Answering Forums, CIKM, 2019
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-conll2003-conll2003-623e8b-1865063750 | 2022-10-24T15:03:21.000Z | null | false | 326a090671e5d16285a76878114dc54704a26e4b | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:conll2003"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-conll2003-conll2003-623e8b-1865063750/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- conll2003
eval_info:
task: entity_extraction
model: dslim/bert-large-NER
metrics: []
dataset_name: conll2003
dataset_config: conll2003
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: dslim/bert-large-NER
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@rdecoupes](https://huggingface.co/rdecoupes) for evaluating this model. |
LHF | null | null | null | false | null | false | LHF/l3d | 2022-10-30T19:59:54.000Z | null | false | 1a60e7dd9eb88961eda78db4639798ddceb9269e | [] | [] | https://huggingface.co/datasets/LHF/l3d/resolve/main/README.md | # Large Labelled Logo Dataset |
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/flame_surge_style | 2022-10-24T19:39:09.000Z | null | false | b00dc249a422f746fa6f3fe520e9dc1948b827f1 | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/flame_surge_style/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Flame Surge Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"art by flame_surge_style"```
If it is too strong, just add [] around it.
Trained for 15,000 steps.
A version trained for 7,500 steps is also included in the files. If you want to use that version instead, remove the ```"-7500"``` from the file name and replace the 15k-step version in your folder
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/GwRM6jf.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/vueZJGB.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/GnscYKw.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/VOyrp21.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/KlpeUpB.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
andrewkroening | null | null | null | false | 94 | false | andrewkroening/Star-wars-scripts-dialogue-IV-VI | 2022-10-27T17:53:39.000Z | null | false | 0fe3d57b821a925081220f954b454f10ace87af8 | [] | [
"license:cc"
] | https://huggingface.co/datasets/andrewkroening/Star-wars-scripts-dialogue-IV-VI/resolve/main/README.md | ---
license: cc
---
### Dataset Contents
This dataset contains the concatenated scripts from the original (and best) Star Wars trilogy. The scripts are reduced to dialogue only, and are tagged with a line number and speaker.
### Dataset Disclaimer
I don't own this data; or Star Wars. But it would be cool if I did.
Star Wars is owned by Lucasfilm. I do not own any of the rights to this information.
The scripts are derived from a couple of sources:
* This [GitHub Repo](https://github.com/gastonstat/StarWars) with raw files
* A [Kaggle Dataset](https://www.kaggle.com/datasets/xvivancos/star-wars-movie-scripts) put together by whoever 'Xavier' is
### May the Force be with you |
ACOSharma | null | null | null | false | null | false | ACOSharma/literature | 2022-10-28T15:38:43.000Z | null | false | 61f49d80d69c6208a9bfffb1cab4b98c9a9accf8 | [] | [
"license:cc-by-sa-4.0"
] | https://huggingface.co/datasets/ACOSharma/literature/resolve/main/README.md | ---
license: cc-by-sa-4.0
---
# Literature Dataset
## Files
A dataset containing novels, epics and essays.
The files are as follows:
- main.txt, a file with all the texts, one text per line, all English
- vocab.txt, a file with the trained (BERT) vocab, one word per line
- train.csv, a file with length-129 token sequences stored as comma-separated ints, containing 48,758 samples (6,289,782 tokens)
- test.csv, the test split in the same format, 5,417 samples (698,793 tokens)
- DatasetDistribution.png, a file with all the texts and a plot with character length
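Assuming each row of train.csv/test.csv is one comma-separated sequence of integer token IDs, as the description above suggests, the splits can be read with the standard csv module. A short sketch over a hypothetical two-row excerpt (real rows have 129 token IDs each):

```python
import csv
import io

# Hypothetical excerpt of train.csv; the real file has 48,758 rows of 129 IDs
sample = io.StringIO("101,2054,2003,102\n101,1037,3899,102\n")
sequences = [[int(tok) for tok in row] for row in csv.reader(sample)]

print(len(sequences))  # 2
print(sequences[0])    # [101, 2054, 2003, 102]
```

For the actual files, replace the in-memory buffer with `open("train.csv")`.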
## Texts
The texts used are these:
- Wuthering Heights
- Ulysses
- Treasure Island
- The War of the Worlds
- The Republic
- The Prophet
- The Prince
- The Picture of Dorian Gray
- The Odyssey
- The Great Gatsby
- The Brothers Karamazov
- Second Treatise of Government
- Pride and Prejudice
- Peter Pan
- Moby Dick
- Metamorphosis
- Little Women
- Les Misérables
- Japanese Girls and Women
- Iliad
- Heart of Darkness
- Grimms' Fairy Tales
- Great Expectations
- Frankenstein
- Emma
- Dracula
- Don Quixote
- Crime and Punishment
- Christmas Carol
- Beyond Good and Evil
- Anna Karenina
- Adventures of Sherlock Holmes
- Adventures of Huckleberry Finn
- Adventures in Wonderland
- A Tale of Two Cities
- A Room with A View |
tramzel | null | null | null | false | 29 | false | tramzel/fndds | 2022-10-24T23:14:22.000Z | null | false | b97a2f9f26e3f520994730d5a3fa4002294dba0b | [] | [
"license:unknown"
] | https://huggingface.co/datasets/tramzel/fndds/resolve/main/README.md | ---
license: unknown
---
|
SickBoy | null | @article{Jaume2019FUNSDAD,
title={FUNSD: A Dataset for Form Understanding in Noisy Scanned Documents},
author={Guillaume Jaume and H. K. Ekenel and J. Thiran},
journal={2019 International Conference on Document Analysis and Recognition Workshops (ICDARW)},
year={2019},
volume={2},
pages={1-6}
} | https://guillaumejaume.github.io/FUNSD/ | false | 51 | false | SickBoy/layout_documents | 2022-10-26T03:12:05.000Z | null | false | 66f4b74f4674267c30df8a5ed334d7e90cb59c1c | [] | [
"license:openrail"
] | https://huggingface.co/datasets/SickBoy/layout_documents/resolve/main/README.md | ---
license: openrail
---
|
iejMac | null | null | null | false | null | false | iejMac/CLIP-MSR-VTT | 2022-10-31T05:03:18.000Z | null | false | c404fe3052627c0d9bc1ea0b5aacab33507364d5 | [] | [
"license:mit"
] | https://huggingface.co/datasets/iejMac/CLIP-MSR-VTT/resolve/main/README.md | ---
license: mit
---
|
poloclub | null | @article{wangDiffusionDBLargescalePrompt2022,
title = {{{DiffusionDB}}: {{A}} Large-Scale Prompt Gallery Dataset for Text-to-Image Generative Models},
author = {Wang, Zijie J. and Montoya, Evan and Munechika, David and Yang, Haoyang and Hoover, Benjamin and Chau, Duen Horng},
year = {2022},
journal = {arXiv:2210.14896 [cs]},
url = {https://arxiv.org/abs/2210.14896}
} | DiffusionDB is the first large-scale text-to-image prompt dataset. It contains 2
million images generated by Stable Diffusion using prompts and hyperparameters
specified by real users. The unprecedented scale and diversity of this
human-actuated dataset provide exciting research opportunities in understanding
the interplay between prompts and generative models, detecting deepfakes, and
designing human-AI interaction tools to help users more easily use these models. | false | 1,485 | false | poloclub/diffusiondb | 2022-11-15T21:41:34.000Z | null | false | 100e6df7a779ef015ff6f2c4c93284466afb06cc | [] | [
"arxiv:2210.14896",
"layout:default",
"title:Home",
"annotations_creators:no-annotation",
"language:en",
"language_creators:found",
"license:cc0-1.0",
"multilinguality:multilingual",
"size_categories:n>1T",
"source_datasets:original",
"tags:stable diffusion",
"tags:prompt engineering",
"tags:prompts",
"tags:research paper",
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_ids:image-captioning"
] | https://huggingface.co/datasets/poloclub/diffusiondb/resolve/main/README.md | ---
layout: default
title: Home
nav_order: 1
has_children: false
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: DiffusionDB
size_categories:
- n>1T
source_datasets:
- original
tags:
- stable diffusion
- prompt engineering
- prompts
- research paper
task_categories:
- text-to-image
- image-to-text
task_ids:
- image-captioning
---
# DiffusionDB
<img width="100%" src="https://user-images.githubusercontent.com/15007159/201762588-f24db2b8-dbb2-4a94-947b-7de393fc3d33.gif">
## Table of Contents
- [DiffusionDB](#diffusiondb)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Two Subsets](#two-subsets)
- [Key Differences](#key-differences)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Metadata](#dataset-metadata)
- [Metadata Schema](#metadata-schema)
- [Data Splits](#data-splits)
- [Loading Data Subsets](#loading-data-subsets)
- [Method 1: Using Hugging Face Datasets Loader](#method-1-using-hugging-face-datasets-loader)
- [Method 2. Use the PoloClub Downloader](#method-2-use-the-poloclub-downloader)
- [Usage/Examples](#usageexamples)
- [Downloading a single file](#downloading-a-single-file)
- [Downloading a range of files](#downloading-a-range-of-files)
- [Downloading to a specific directory](#downloading-to-a-specific-directory)
- [Setting the files to unzip once they've been downloaded](#setting-the-files-to-unzip-once-theyve-been-downloaded)
- [Method 3. Use `metadata.parquet` (Text Only)](#method-3-use-metadataparquet-text-only)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [DiffusionDB homepage](https://poloclub.github.io/diffusiondb)
- **Repository:** [DiffusionDB repository](https://github.com/poloclub/diffusiondb)
- **Distribution:** [DiffusionDB Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb)
- **Paper:** [DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models](https://arxiv.org/abs/2210.14896)
- **Point of Contact:** [Jay Wang](mailto:jayw@gatech.edu)
### Dataset Summary
DiffusionDB is the first large-scale text-to-image prompt dataset. It contains **14 million** images generated by Stable Diffusion using prompts and hyperparameters specified by real users.
DiffusionDB is publicly available at [🤗 Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb).
### Supported Tasks and Leaderboards
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
### Languages
The text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian.
### Two Subsets
DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs.
|Subset|Num of Images|Num of Unique Prompts|Size|Image Directory|Metadata Table|
|:--|--:|--:|--:|--:|--:|
|DiffusionDB 2M|2M|1.5M|1.6TB|`images/`|`metadata.parquet`|
|DiffusionDB Large|14M|1.8M|6.5TB|`diffusiondb-large-part-1/` `diffusiondb-large-part-2/`|`metadata-large.parquet`|
##### Key Differences
1. Two subsets have a similar number of unique prompts, but DiffusionDB Large has many more images. DiffusionDB Large is a superset of DiffusionDB 2M.
2. Images in DiffusionDB 2M are stored in `png` format; images in DiffusionDB Large use a lossless `webp` format.
## Dataset Structure
We use a modularized file structure to distribute DiffusionDB. The 2 million images in DiffusionDB 2M are split into 2,000 folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters. Similarly, the 14 million images in DiffusionDB Large are split into 14,000 folders.
```bash
# DiffusionDB 2M
./
├── images
│ ├── part-000001
│ │ ├── 3bfcd9cf-26ea-4303-bbe1-b095853f5360.png
│ │ ├── 5f47c66c-51d4-4f2c-a872-a68518f44adb.png
│ │ ├── 66b428b9-55dc-4907-b116-55aaa887de30.png
│ │ ├── [...]
│ │ └── part-000001.json
│ ├── part-000002
│ ├── part-000003
│ ├── [...]
│ └── part-002000
└── metadata.parquet
```
```bash
# DiffusionDB Large
./
├── diffusiondb-large-part-1
│ ├── part-000001
│ │ ├── 0a8dc864-1616-4961-ac18-3fcdf76d3b08.webp
│ │ ├── 0a25cacb-5d91-4f27-b18a-bd423762f811.webp
│ │ ├── 0a52d584-4211-43a0-99ef-f5640ee2fc8c.webp
│ │ ├── [...]
│ │ └── part-000001.json
│ ├── part-000002
│ ├── part-000003
│ ├── [...]
│ └── part-010000
├── diffusiondb-large-part-2
│ ├── part-010001
│ │ ├── 0a68f671-3776-424c-91b6-c09a0dd6fc2d.webp
│ │ ├── 0a0756e9-1249-4fe2-a21a-12c43656c7a3.webp
│ │ ├── 0aa48f3d-f2d9-40a8-a800-c2c651ebba06.webp
│ │ ├── [...]
│ │ └── part-000001.json
│ ├── part-010002
│ ├── part-010003
│ ├── [...]
│ └── part-014000
└── metadata-large.parquet
```
These sub-folders have names `part-0xxxxx`, and each image has a unique name generated by [UUID Version 4](https://en.wikipedia.org/wiki/Universally_unique_identifier). The JSON file in a sub-folder has the same name as the sub-folder. Each image is a `PNG` file (DiffusionDB 2M) or a lossless `WebP` file (DiffusionDB Large). The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters.
### Data Instances
For example, below is the image of `f3501e05-aef7-4225-a9e9-f516527408ac.png` and its key-value pair in `part-000001.json`.
<img width="300" src="https://i.imgur.com/gqWcRs2.png">
```json
{
"f3501e05-aef7-4225-a9e9-f516527408ac.png": {
"p": "geodesic landscape, john chamberlain, christopher balaskas, tadao ando, 4 k, ",
"se": 38753269,
"c": 12.0,
"st": 50,
"sa": "k_lms"
},
}
```
### Data Fields
- key: Unique image name
- `p`: Prompt
- `se`: Random seed
- `c`: CFG Scale (guidance scale)
- `st`: Steps
- `sa`: Sampler
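These key-value pairs can be parsed with Python's standard json module. A minimal sketch using the example entry above (inlined here for illustration instead of reading part-000001.json from disk):

```python
import json

# Example entry from a part JSON file (normally loaded from e.g. part-000001.json)
part = json.loads("""
{
  "f3501e05-aef7-4225-a9e9-f516527408ac.png": {
    "p": "geodesic landscape, john chamberlain, christopher balaskas, tadao ando, 4 k, ",
    "se": 38753269,
    "c": 12.0,
    "st": 50,
    "sa": "k_lms"
  }
}
""")

# Map each image filename to its prompt
for filename, meta in part.items():
    print(filename, "->", meta["p"])
```

To process a downloaded sub-folder, replace the inline string with `json.load(open("part-000001/part-000001.json"))`.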
### Dataset Metadata
To help you easily access prompts and other attributes of images without downloading all the Zip files, we include two metadata tables `metadata.parquet` and `metadata-large.parquet` for DiffusionDB 2M and DiffusionDB Large, respectively.
The shape of `metadata.parquet` is (2000000, 13) and the shape of `metadata-large.parquet` is (14000000, 13). The two tables share the same schema, and each row represents an image. We store these tables in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table.
Below are three random rows from `metadata.parquet`.
| image_name | prompt | part_id | seed | step | cfg | sampler | width | height | user_name | timestamp | image_nsfw | prompt_nsfw |
|:-----------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------:|-----------:|-------:|------:|----------:|--------:|---------:|:-----------------------------------------------------------------|:--------------------------|-------------:|--------------:|
| 0c46f719-1679-4c64-9ba9-f181e0eae811.png | a small liquid sculpture, corvette, viscous, reflective, digital art | 1050 | 2026845913 | 50 | 7 | 8 | 512 | 512 | c2f288a2ba9df65c38386ffaaf7749106fed29311835b63d578405db9dbcafdb | 2022-08-11 09:05:00+00:00 | 0.0845108 | 0.00383462 |
| a00bdeaa-14eb-4f6c-a303-97732177eae9.png | human sculpture of lanky tall alien on a romantic date at italian restaurant with smiling woman, nice restaurant, photography, bokeh | 905 | 1183522603 | 50 | 10 | 8 | 512 | 768 | df778e253e6d32168eb22279a9776b3cde107cc82da05517dd6d114724918651 | 2022-08-19 17:55:00+00:00 | 0.692934 | 0.109437 |
| 6e5024ce-65ed-47f3-b296-edb2813e3c5b.png | portrait of barbaric spanish conquistador, symmetrical, by yoichi hatakenaka, studio ghibli and dan mumford | 286 | 1713292358 | 50 | 7 | 8 | 512 | 640 | 1c2e93cfb1430adbd956be9c690705fe295cbee7d9ac12de1953ce5e76d89906 | 2022-08-12 03:26:00+00:00 | 0.0773138 | 0.0249675 |
#### Metadata Schema
`metadata.parquet` and `metadata-large.parquet` share the same schema.
|Column|Type|Description|
|:---|:---|:---|
|`image_name`|`string`|Image UUID filename.|
|`prompt`|`string`|The text prompt used to generate this image.|
|`part_id`|`uint16`|Folder ID of this image.|
|`seed`|`uint32`| Random seed used to generate this image.|
|`step`|`uint16`| Step count (hyperparameter).|
|`cfg`|`float32`| Guidance scale (hyperparameter).|
|`sampler`|`uint8`| Sampler method (hyperparameter). Mapping: `{1: "ddim", 2: "plms", 3: "k_euler", 4: "k_euler_ancestral", 5: "k_heun", 6: "k_dpm_2", 7: "k_dpm_2_ancestral", 8: "k_lms", 9: "others"}`.
|`width`|`uint16`|Image width.|
|`height`|`uint16`|Image height.|
|`user_name`|`string`|The SHA256 hash of the unique Discord ID of the user who generated this image. For example, the hash for `xiaohk#3146` is `e285b7ef63be99e9107cecd79b280bde602f17e0ca8363cb7a0889b67f0b5ed0`. "deleted_account" refers to users who have deleted their accounts. None means the image was deleted before we scraped it for the second time.|
|`timestamp`|`timestamp`|UTC timestamp when this image was generated. None means the image was deleted before we scraped it for the second time. Note that the timestamp is not accurate for duplicate images that have the same prompt, hyperparameters, width, and height.|
|`image_nsfw`|`float32`|Likelihood of an image being NSFW. Scores are predicted by [LAION's state-of-the-art NSFW detector](https://github.com/LAION-AI/LAION-SAFETY) (range from 0 to 1). A score of 2.0 means the image has already been flagged as NSFW and blurred by Stable Diffusion.|
|`prompt_nsfw`|`float32`|Likelihood of a prompt being NSFW. Scores are predicted by the library [Detoxify](https://github.com/unitaryai/detoxify). Each score represents the maximum of `toxicity` and `sexual_explicit` (range from 0 to 1).|
> **Warning**
> Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect—DiffusionDB still contains some NSFW images. Therefore, we compute and provide the NSFW scores for images and prompts using the state-of-the-art models. The distribution of these scores is shown below. Please decide an appropriate NSFW score threshold to filter out NSFW images before using DiffusionDB in your projects.
<img src="https://i.imgur.com/1RiGAXL.png" width="100%">
### Data Splits
For DiffusionDB 2M, we split 2 million images into 2,000 folders where each folder contains 1,000 images and a JSON file. For DiffusionDB Large, we split 14 million images into 14,000 folders where each folder contains 1,000 images and a JSON file.
### Loading Data Subsets
DiffusionDB is large (1.6TB or 6.5 TB)! However, with our modularized file structure, you can easily load a desirable number of images and their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.
#### Method 1: Using Hugging Face Datasets Loader
You can use the Hugging Face [`Datasets`](https://huggingface.co/docs/datasets/quickstart) library to easily load prompts and images from DiffusionDB. We pre-defined 16 DiffusionDB subsets (configurations) based on the number of instances. You can see all subsets in the [Dataset Preview](https://huggingface.co/datasets/poloclub/diffusiondb/viewer/all/train).
```python
import numpy as np
from datasets import load_dataset
# Load the dataset with the `large_random_1k` subset
dataset = load_dataset('poloclub/diffusiondb', 'large_random_1k')
```
#### Method 2. Use the PoloClub Downloader
This repo includes a Python downloader [`download.py`](https://github.com/poloclub/diffusiondb/blob/main/scripts/download.py) that allows you to download and load DiffusionDB. You can use it from your command line. Below is an example of loading a subset of DiffusionDB.
##### Usage/Examples
The script is run using command-line arguments as follows:
- `-i` `--index` - File to download or lower bound of a range of files if `-r` is also set.
- `-r` `--range` - Upper bound of range of files to download if `-i` is set.
- `-o` `--output` - Name of custom output directory. Defaults to the current directory if not set.
- `-z` `--unzip` - Unzip the file/files after downloading
- `-l` `--large` - Download from Diffusion DB Large. Defaults to Diffusion DB 2M.
###### Downloading a single file
The specific file to download is supplied as the number at the end of the filename on Hugging Face. The script will automatically pad the number out and generate the URL.
```bash
python download.py -i 23
```
###### Downloading a range of files
The upper and lower bounds of the set of files to download are set by the `-i` and `-r` flags respectively.
```bash
python download.py -i 1 -r 2000
```
Note that this range will download the entire dataset. The script will ask you to confirm that you have 1.7 TB free at the download destination.
###### Downloading to a specific directory
The script will default to the location of the dataset's `part` .zip files at `images/`. If you wish to move the download location, you should move these files as well or use a symbolic link.
```bash
python download.py -i 1 -r 2000 -o /home/$USER/datahoarding/etc
```
Again, the script will automatically add the `/` between the directory and the file when it downloads.
###### Setting the files to unzip once they've been downloaded
The script is set to unzip the files _after_ all files have downloaded as both can be lengthy processes in certain circumstances.
```bash
python download.py -i 1 -r 2000 -z
```
#### Method 3. Use `metadata.parquet` (Text Only)
If your task does not require images, then you can easily access all 2 million prompts and hyperparameters in the `metadata.parquet` table.
```python
from urllib.request import urlretrieve
import pandas as pd
# Download the parquet table
table_url = 'https://huggingface.co/datasets/poloclub/diffusiondb/resolve/main/metadata.parquet'
urlretrieve(table_url, 'metadata.parquet')
# Read the table using Pandas
metadata_df = pd.read_parquet('metadata.parquet')
```
## Dataset Creation
### Curation Rationale
Recent diffusion models have gained immense popularity by enabling high-quality and controllable image generation based on text prompts written in natural language. Since the release of these models, people from different domains have quickly applied them to create award-winning artworks, synthetic radiology images, and even hyper-realistic videos.
However, generating images with desired details is difficult, as it requires users to write proper prompts specifying the exact expected results. Developing such prompts requires trial and error, and can often feel random and unprincipled. Simon Willison analogizes writing prompts to wizards learning “magical spells”: users do not understand why some prompts work, but they will add these prompts to their “spell book.” For example, to generate highly-detailed images, it has become a common practice to add special keywords such as “trending on artstation” and “unreal engine” in the prompt.
Prompt engineering has become a field of study in the context of text-to-text generation, where researchers systematically investigate how to construct prompts to effectively solve different downstream tasks. As large text-to-image models are relatively new, there is a pressing need to understand how these models react to prompts, how to write effective prompts, and how to design tools to help users generate images.
To help researchers tackle these critical challenges, we create DiffusionDB, the first large-scale prompt dataset with 14 million real prompt-image pairs.
### Source Data
#### Initial Data Collection and Normalization
We construct DiffusionDB by scraping user-generated images on the official Stable Diffusion Discord server. We choose Stable Diffusion because it is currently the only open-source large text-to-image generative model, and all generated images have a CC0 1.0 Universal Public Domain Dedication license that waives all copyright and allows uses for any purpose. We choose the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion) because it is public, and it has strict rules against generating and sharing illegal, hateful, or NSFW (not suitable for work, such as sexual and violent content) images. The server also disallows users from writing or sharing prompts with personal information.
#### Who are the source language producers?
The language producers are users of the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion).
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The authors removed the discord usernames from the dataset.
We decide to anonymize the dataset because some prompts might include sensitive information: explicitly linking them to their creators can cause harm to creators.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better understanding of large text-to-image generative models.
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
It should be noted that we collect images and their prompts from the Stable Diffusion Discord server. The Discord server has rules against users generating or sharing harmful or NSFW (not suitable for work, such as sexual and violent content) images. The Stable Diffusion model used in the server also has an NSFW filter that blurs the generated images if it detects NSFW content. However, it is still possible that some users had generated harmful images that were not detected by the NSFW filter or removed by the server moderators. Therefore, DiffusionDB can potentially contain these images. To mitigate the potential harm, we provide a [Google Form](https://forms.gle/GbYaSpRNYqxCafMZ9) on the [DiffusionDB website](https://poloclub.github.io/diffusiondb/) where users can report harmful or inappropriate images and prompts. We will closely monitor this form and remove reported images and prompts from DiffusionDB.
### Discussion of Biases
The 14 million images in DiffusionDB have diverse styles and categories. However, Discord can be a biased data source. Our images come from channels where early users could use a bot to use Stable Diffusion before release. As these users had started using Stable Diffusion before the model was public, we hypothesize that they are AI art enthusiasts and are likely to have experience with other text-to-image generative models. Therefore, the prompting style in DiffusionDB might not represent novice users. Similarly, the prompts in DiffusionDB might not generalize to domains that require specific knowledge, such as medical images.
### Other Known Limitations
**Generalizability.** Previous research has shown a prompt that works well on one generative model might not give the optimal result when used in other models.
Therefore, different models may require users to write different prompts. For example, many Stable Diffusion prompts use commas to separate keywords, while this pattern is less common in prompts for DALL-E 2 or Midjourney. Thus, we caution researchers that some research findings from DiffusionDB might not be generalizable to other text-to-image generative models.
## Additional Information
### Dataset Curators
DiffusionDB is created by [Jay Wang](https://zijie.wang), [Evan Montoya](https://www.linkedin.com/in/evan-montoya-b252391b4/), [David Munechika](https://www.linkedin.com/in/dmunechika/), [Alex Yang](https://alexanderyang.me), [Ben Hoover](https://www.bhoov.com), [Polo Chau](https://faculty.cc.gatech.edu/~dchau/).
### Licensing Information
The DiffusionDB dataset is available under the [CC0 1.0 License](https://creativecommons.org/publicdomain/zero/1.0/).
The Python code in this repository is available under the [MIT License](https://github.com/poloclub/diffusiondb/blob/main/LICENSE).
### Citation Information
```bibtex
@article{wangDiffusionDBLargescalePrompt2022,
title = {{{DiffusionDB}}: {{A}} Large-Scale Prompt Gallery Dataset for Text-to-Image Generative Models},
author = {Wang, Zijie J. and Montoya, Evan and Munechika, David and Yang, Haoyang and Hoover, Benjamin and Chau, Duen Horng},
year = {2022},
journal = {arXiv:2210.14896 [cs]},
url = {https://arxiv.org/abs/2210.14896}
}
```
### Contributions
If you have any questions, feel free to [open an issue](https://github.com/poloclub/diffusiondb/issues/new) or contact [Jay Wang](https://zijie.wang).
|
workitos | null | null | null | false | null | false | workitos/SD_Anime_Characters_Repository | 2022-11-11T10:20:30.000Z | null | false | b9bf171f5074372f246208f7c42ff581dfe85e93 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/workitos/SD_Anime_Characters_Repository/resolve/main/README.md | ---
license: unknown
---
|
erya | null | null | null | false | null | false | erya/1111 | 2022-10-25T02:28:35.000Z | null | false | 316f42386810b2f6ed884e884b05cdc085821a05 | [] | [
"license:other"
] | https://huggingface.co/datasets/erya/1111/resolve/main/README.md | ---
license: other
---
|
niurl | null | null | null | false | null | false | niurl/eraser_cose | 2022-10-25T03:22:37.000Z | null | false | 37b04e9237bdfaba2f149f437f104f63a6d4f25a | [] | [] | https://huggingface.co/datasets/niurl/eraser_cose/resolve/main/README.md | ---
dataset_info:
features:
- name: doc_id
dtype: string
- name: question
sequence: string
- name: query
dtype: string
- name: evidence_span
sequence:
sequence: int64
- name: classification
dtype: string
splits:
- name: test
num_bytes: 282071
num_examples: 1079
- name: train
num_bytes: 2316094
num_examples: 8752
- name: val
num_bytes: 288029
num_examples: 1086
download_size: 1212369
dataset_size: 2886194
---
# Dataset Card for "eraser_cose"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NightMachinery | null | null | null | false | 1 | false | NightMachinery/irc_chat_log_1 | 2022-10-25T05:29:03.000Z | null | false | eaf9d1f06ca1c8ca18560bf7b9ac6f5002528850 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/NightMachinery/irc_chat_log_1/resolve/main/README.md | ---
license: apache-2.0
---
|
NightMachinery | null | null | null | false | null | false | NightMachinery/irc_chat_log_1_tmp_normal | 2022-10-25T05:32:19.000Z | null | false | cd4998d361a63ed92f9a2b0e8cea93a2bd574c27 | [] | [] | https://huggingface.co/datasets/NightMachinery/irc_chat_log_1_tmp_normal/resolve/main/README.md | ---
dataset_info:
features:
- name: text_raw
dtype: string
- name: channel
dtype: string
- name: username
dtype: string
- name: time
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 308201948
num_examples: 1615682
download_size: 166578792
dataset_size: 308201948
---
# Dataset Card for "irc_chat_log_1_tmp_normal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NightMachinery | null | null | null | false | null | false | NightMachinery/irc_chat_log_1_dateless | 2022-10-25T06:14:38.000Z | null | false | 9b10a97096ac5721af6812dc257c5842fc1b2017 | [] | [] | https://huggingface.co/datasets/NightMachinery/irc_chat_log_1_dateless/resolve/main/README.md | ---
dataset_info:
features:
- name: text_raw
dtype: string
- name: channel
dtype: string
- name: type
dtype: string
- name: username
dtype: string
- name: time
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4162978394.7700057
num_examples: 21627244
download_size: 1617316833
dataset_size: 4162978394.7700057
---
# Dataset Card for "irc_chat_log_1_dateless"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anab | null | null | null | false | 1 | false | anab/copa-sse | 2022-10-26T01:53:17.000Z | null | false | 2192eb5fc49e5dda28d7e3ea9aa4cd35ab00ef5b | [] | [
"arxiv:2201.06777",
"annotations_creators:crowdsourced",
"language:en",
"language_creators:crowdsourced",
"license:mit",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"tags:commonsense reasoning",
"tags:explanation",
"tags:graph-based reasoning",
"task_categories:text2text-generation",
"task_categories:multiple-choice",
"task_ids:explanation-generation"
] | https://huggingface.co/datasets/anab/copa-sse/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license:
- mit
multilinguality:
- monolingual
pretty_name: Semi-structured Explanations for Commonsense Reasoning
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- commonsense reasoning
- explanation
- graph-based reasoning
task_categories:
- text2text-generation
- multiple-choice
task_ids:
- explanation-generation
---
# Dataset Card for COPA-SSE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/a-brassard/copa-sse
- **Paper:** [COPA-SSE: Semi-Structured Explanations for Commonsense Reasoning](https://arxiv.org/abs/2201.06777)
- **Point of Contact:** [Ana Brassard](mailto:ana.brassard@riken.jp)
### Dataset Summary

COPA-SSE contains crowdsourced explanations for the [Balanced COPA](https://balanced-copa.github.io/) dataset, a variant of the [Choice of Plausible Alternatives (COPA)](https://people.ict.usc.edu/~gordon/copa.html) benchmark. The explanations are formatted as a set of triple-like common sense statements with [ConceptNet](https://conceptnet.io/) relations but freely written concepts.
### Supported Tasks and Leaderboards
Can be used to train a model for explain+predict or predict+explain settings. Suited for both text-based and graph-based architectures. Base task is COPA (causal QA).
### Languages
English
## Dataset Structure
### Data Instances
The validation and test sets each contain Balanced COPA samples with added explanations in `.jsonl` format. The question ids match the original questions of the Balanced COPA validation and test sets, respectively.
### Data Fields
Each entry contains:
- the original question (matching format and ids)
- `human-explanations`: a list of explanations each containing:
- `expl-id`: the explanation id
- `text`: the explanation in plain text (full sentences)
- `worker-id`: anonymized worker id (the author of the explanation)
- `worker-avg`: the average score the author got for their explanations
- `all-ratings`: all collected ratings for the explanation
- `filtered-ratings`: ratings excluding those that failed the control
- `triples`: the triple-form explanation (a list of ConceptNet-like triples)
Example entry:
```
id: 1,
asks-for: cause,
most-plausible-alternative: 1,
p: "My body cast a shadow over the grass.",
a1: "The sun was rising.",
a2: "The grass was cut.",
human-explanations: [
{expl-id: f4d9b407-681b-4340-9be1-ac044f1c2230,
text: "Sunrise causes casted shadows.",
worker-id: 3a71407b-9431-49f9-b3ca-1641f7c05f3b,
worker-avg: 3.5832864694635025,
all-ratings: [1, 3, 3, 4, 3],
filtered-ratings: [3, 3, 4, 3],
filtered-avg-rating: 3.25,
triples: [["sunrise", "Causes", "casted shadows"]]
}, ...]
```
### Data Splits
Follows original Balanced COPA split: 1000 dev and 500 test instances. Each instance has up to nine explanations.
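As a minimal sketch (not part of the official tooling), the per-question explanations can be filtered to keep only the highest-rated one. The record below mirrors the documented fields with illustrative values:

```python
# Sketch: pick the best-rated human explanation for a COPA-SSE record.
# The record structure follows the "Data Fields" section above; values are illustrative.
record = {
    "id": 1,
    "asks-for": "cause",
    "most-plausible-alternative": 1,
    "p": "My body cast a shadow over the grass.",
    "a1": "The sun was rising.",
    "a2": "The grass was cut.",
    "human-explanations": [
        {"text": "Sunrise causes casted shadows.",
         "filtered-avg-rating": 3.25,
         "triples": [["sunrise", "Causes", "casted shadows"]]},
        {"text": "Shadows are created by the sun.",
         "filtered-avg-rating": 2.75,
         "triples": [["shadows", "CreatedBy", "the sun"]]},
    ],
}

def best_explanation(rec):
    """Return the explanation dict with the highest filtered average rating."""
    return max(rec["human-explanations"], key=lambda e: e["filtered-avg-rating"])

best = best_explanation(record)
print(best["text"])  # → "Sunrise causes casted shadows."
```

The same selection can be applied per line of the released `.jsonl` files before training an explain+predict model.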
## Dataset Creation
### Curation Rationale
The goal was to collect human-written explanations to supplement an existing commonsense reasoning benchmark. The triple-like format was designed to support graph-based models and increase the overall data quality, the latter being notoriously lacking in freely-written crowdsourced text.
### Source Data
#### Initial Data Collection and Normalization
The explanations in COPA-SSE are fully crowdsourced via the Amazon Mechanical Turk platform. Workers entered explanations by providing one or more concept-relation-concept triples. The explanations were then rated by different annotators with one- to five-star ratings. The final dataset contains explanations with a range of quality ratings. Additional collection rounds guaranteed that each sample has at least one explanation rated 3.5 stars or higher.
#### Who are the source language producers?
The original COPA questions (500 dev+500 test) were initially hand-crafted by experts. Similarly, the additional 500 development samples in Balanced COPA were authored by a small team of NLP researchers. Finally, the added explanations and quality ratings in COPA-SSE were collected with the help of Amazon Mechanical Turk workers who passed initial qualification rounds.
### Annotations
#### Annotation process
Workers were shown a Balanced COPA question, its answer, and a short instructional text. Then, they filled in free-form text fields for head and tail concepts and selected the relation from a drop-down menu with a curated selection of ConceptNet relations. Each explanation was rated by five different workers who were shown the same question and answer with five candidate explanations.
#### Who are the annotators?
The workers were restricted to persons located in the U.S. or G.B., with a HIT approval rate of 98% or more, and 500 or more approved HITs. Their identities and further personal information are not available.
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
Models trained to output similar explanations as those in COPA-SSE may not necessarily provide convincing or faithful explanations. Researchers should carefully evaluate the resulting explanations before considering any real-world applications.
### Discussion of Biases
COPA questions ask for causes or effects of everyday actions or interactions, some of them containing gendered language. Some explanations may reinforce harmful stereotypes if their reasoning is based on biased assumptions. These biases were not verified during collection.
### Other Known Limitations
The data was originally intended to be explanation *graphs*, i.e., hypothetical "ideal" subgraphs of a commonsense knowledge graph. While they can still function as valid natural language explanations, their wording may be at times unnatural to a human and may be better suited for graph-based implementations.
## Additional Information
### Dataset Curators
This work was authored by Ana Brassard, Benjamin Heinzerling, Pride Kavumba, and Kentaro Inui. All are members of both the RIKEN AIP Natural Language Understanding Team and the Tohoku NLP Lab at Tohoku University.
### Licensing Information
COPA-SSE is released under the [MIT License](https://mit-license.org/).
### Citation Information
```
@InProceedings{copa-sse:LREC2022,
author = {Brassard, Ana and Heinzerling, Benjamin and Kavumba, Pride and Inui, Kentaro},
title = {COPA-SSE: Semi-structured Explanations for Commonsense Reasoning},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {3994--4000},
url = {https://aclanthology.org/2022.lrec-1.425}
}
```
### Contributions
Thanks to [@a-brassard](https://github.com/a-brassard) for adding this dataset. |
AbderrahmanSkiredj1 | null | null | null | false | 21 | false | AbderrahmanSkiredj1/Arabic_Common_sense_validation | 2022-10-25T09:33:23.000Z | null | false | b0de5b1be2da9e094b0e25cc50f2b36d7050ea7e | [] | [] | https://huggingface.co/datasets/AbderrahmanSkiredj1/Arabic_Common_sense_validation/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1421339
num_examples: 20000
- name: validation
num_bytes: 134514
num_examples: 2000
download_size: 771396
dataset_size: 1555853
---
# Dataset Card for "Arabic_Common_sense_validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
findzebra | null | null | null | false | 318 | false | findzebra/corpus | 2022-10-25T09:58:33.000Z | null | false | d2ee25d7fb18334d410a678499a94afede8ec4f4 | [] | [] | https://huggingface.co/datasets/findzebra/corpus/resolve/main/README.md | # FindZebra corpus
A collection of 30,658 curated articles about rare diseases gathered from GARD, GeneReviews, Genetics Home Reference, OMIM, Orphanet, and Wikipedia. Each article is referenced with a Concept Unique Identifier ([CUI](https://www.nlm.nih.gov/research/umls/new_users/online_learning/Meta_005.html)).
## Preprocessing
The raw HTML content of each article has been processed using the following code (`text` column):
```python
# Preprocessing code
import math
import html2text
parser = html2text.HTML2Text()
parser.ignore_links = True
parser.ignore_images = True
parser.ignore_tables = True
parser.ignore_emphasis = True
parser.body_width = math.inf  # disable line wrapping
article_text = parser.handle(article_html)
``` |
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/lightning_style | 2022-10-25T10:05:17.000Z | null | false | 91b1380fc7ff16a970b8b240e56c427b5638087a | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/lightning_style/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Lightning Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"art by lightning_style"```
If it is too strong, just add [] around it.
Trained for 10,000 steps.
I also added a version trained for 7,500 steps to the files. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10,000-step version in your folder
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/HNHRcZg.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/8B31Umz.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/88sHalA.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/WhlLomb.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/a1Usv3u.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The author claims no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
findzebra | null | null | null | false | 5 | false | findzebra/queries | 2022-10-25T10:02:34.000Z | null | false | 8552aab8a6e2bb55739fba702171fd1a4a12d181 | [] | [] | https://huggingface.co/datasets/findzebra/queries/resolve/main/README.md | # FindZebra Queries
A set of 248 search queries annotated with the correct diagnosis. The diagnosis is referenced with a Concept Unique Identifier ([CUI](https://www.nlm.nih.gov/research/umls/new_users/online_learning/Meta_005.html)). In a retrieval setting, the task consists of retrieving an article from the [FindZebra corpus](https://huggingface.co/datasets/findzebra/corpus) with a CUI that matches the query CUI. |
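As a sketch of the retrieval setting described above (toy CUIs, not the real corpus or a real retriever), success@k can be computed by checking whether any of the top-k retrieved articles carries the query's CUI:

```python
# Sketch: success@k for CUI-matching retrieval (toy data; a real evaluation
# would rank articles from the FindZebra corpus with an actual retriever).
def success_at_k(query_cui, ranked_article_cuis, k):
    """True if any of the top-k retrieved articles has the query's CUI."""
    return query_cui in ranked_article_cuis[:k]

# Each entry: (query CUI, CUIs of retrieved articles in ranked order).
queries = [
    ("C0027832", ["C0027832", "C0012345"]),  # correct article ranked first
    ("C0036920", ["C0099999", "C0036920"]),  # correct article ranked second
]
for k in (1, 2):
    hits = sum(success_at_k(cui, ranked, k) for cui, ranked in queries)
    print(f"success@{k}: {hits / len(queries):.2f}")
```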
juanhebert | null | null | null | false | 3 | false | juanhebert/sv_corpora_parliament_processed | 2022-11-03T10:21:27.000Z | null | false | 25700c3e831b26e4224a7c14b226e8cccdf3839f | [] | [] | https://huggingface.co/datasets/juanhebert/sv_corpora_parliament_processed/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 292359009
num_examples: 1892723
download_size: 158940474
dataset_size: 292359009
---
# Dataset Card for "sv_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ludovicoderic | null | null | null | false | null | false | ludovicoderic/alice_test | 2022-10-27T12:13:09.000Z | null | false | 16f88ef7d299b7e1618f2c432ff04df431d10222 | [] | [] | https://huggingface.co/datasets/ludovicoderic/alice_test/resolve/main/README.md | |
darrow-ai | null | null | null | false | null | false | darrow-ai/USClassActionOutcomes_ExpertsAnnotations | 2022-11-06T12:35:30.000Z | null | false | 155b325de98e02bb6286fce64282d2c4c30a1b41 | [] | [
"arxiv:2211.00582",
"license:gpl-3.0"
] | https://huggingface.co/datasets/darrow-ai/USClassActionOutcomes_ExpertsAnnotations/resolve/main/README.md | ---
license: gpl-3.0
---
## Dataset Description
- **Homepage:** https://www.darrow.ai/
- **Repository:** https://github.com/darrow-labs/ClassActionPrediction
- **Paper:** https://arxiv.org/abs/2211.00582
- **Leaderboard:** N/A
- **Point of Contact:** [Gila Hayat](mailto:gila@darrow.ai)
### Dataset Summary
USClassActions is an English dataset of 200 complaints from the US Federal Court with the respective binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset to promote robustness and fairness studies in the critical area of legal NLP. The data was annotated using Darrow.ai's proprietary tool.
### Data Instances
```python
from datasets import load_dataset
dataset = load_dataset('darrow-ai/USClassActionOutcomes_ExpertsAnnotations')
```
### Data Fields
`id`: (**int**) a unique identifier of the document \
`origin_label `: (**str**) the outcome of the case \
`target_text`: (**str**) the facts of the case \
`annotator_prediction `: (**str**) annotators predictions of the case outcome based on the target_text \
`annotator_confidence `: (**str**) the annotator's level of confidence in their outcome prediction
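Using the field names above, a minimal sketch (rows are illustrative, not real data) of measuring how often the expert annotators' predictions agree with the actual case outcomes:

```python
# Sketch: annotator accuracy against the true case outcomes.
# Rows are illustrative; real data comes from load_dataset(...) as shown above.
rows = [
    {"origin_label": "Win",  "annotator_prediction": "Win"},
    {"origin_label": "Lose", "annotator_prediction": "Win"},
    {"origin_label": "Lose", "annotator_prediction": "Lose"},
    {"origin_label": "Win",  "annotator_prediction": "Win"},
]

def annotator_accuracy(examples):
    """Fraction of cases where the annotator predicted the true outcome."""
    correct = sum(r["annotator_prediction"] == r["origin_label"] for r in examples)
    return correct / len(examples)

print(f"annotator accuracy: {annotator_accuracy(rows):.2f}")  # → 0.75
```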
### Curation Rationale
The dataset was curated by Darrow.ai (2022).
### Citation Information
*Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus*
*ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US*
*Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022*
```
@InProceedings{darrow-niklaus-2022-uscp,
author = {Semo, Gil
and Bernsohn, Dor
and Hagag, Ben
and Hayat, Gila
and Niklaus, Joel},
title = {ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US},
booktitle = {Proceedings of the 2022 Natural Legal Language Processing Workshop},
year = {2022},
location = {Abu Dhabi},
}
```
|
KETI-AIR | null | There is no citation information | # Document Summarization Corpus
## Introduction
(Version 1.0) A corpus consisting of topic sentences extracted from documents and summaries of those documents.
## Usage
```python
from datasets import load_dataset
raw_datasets = load_dataset(
"nikl_summarization.py",
"base",
cache_dir="huggingface_datasets",
data_dir="data",
ignore_verifications=True,
)
dataset_train = raw_datasets["train"]
for item in dataset_train:
print(item)
exit()
```
## Documentation
[Link](https://rlkujwkk7.toastcdn.net/6/NIKL_SUMMARIZATION(v1.0).pdf) | false | 62 | false | KETI-AIR/nikl_summarization | 2022-10-31T06:07:43.000Z | null | false | 54b98fe3cefa0d99c15b29708e85dc6fc65bc0e1 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/nikl_summarization/resolve/main/README.md | ---
license: apache-2.0
---
|
vonewman | null | null | null | false | 3 | false | vonewman/word-embeddings-dataset | 2022-10-25T13:07:40.000Z | null | false | 6f2bcf9f0a73bd98dcd70443a21c67322cd04db4 | [] | [
"license:mit"
] | https://huggingface.co/datasets/vonewman/word-embeddings-dataset/resolve/main/README.md | ---
license: mit
---
|
arias048 | null | null | null | false | null | false | arias048/myPictures | 2022-10-28T19:45:30.000Z | null | false | 48c38c625b1fdfd2f04b8788874509ddc3aa0af1 | [] | [
"license:other"
] | https://huggingface.co/datasets/arias048/myPictures/resolve/main/README.md | ---
license: other
---
|
lcampillos | null | null | null | false | 1 | false | lcampillos/CLARA-MeD | 2022-10-25T14:54:04.000Z | null | false | ee9af9cb8db048248c9a0665691bfc6903d09113 | [] | [
"license:cc-by-nc-4.0"
] | https://huggingface.co/datasets/lcampillos/CLARA-MeD/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
# Dataset Card for CLARA-MeD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://clara-nlp.uned.es/home/med/](https://clara-nlp.uned.es/home/med/)
- **Repository:** [https://github.com/lcampillos/CLARA-MeD](https://github.com/lcampillos/CLARA-MeD)
- **Paper:** [http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6439](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6439)
- **DOI:** [https://doi.org/10.20350/digitalCSIC/14644](https://doi.org/10.20350/digitalCSIC/14644)
- **Point of Contact:** [Leonardo Campillos-Llanos](leonardo.campillos@csic.es)
### Dataset Summary
A parallel corpus with a subset of 3800 sentence pairs of professional and lay-language variants (149 862 tokens) as a benchmark for medical text simplification. This dataset was collected in the CLARA-MeD project, whose goal is to simplify medical texts in Spanish and reduce the language barrier to patients' informed decision-making.
### Supported Tasks and Leaderboards
Medical text simplification
### Languages
Spanish
## Dataset Structure
### Data Instances
For each instance, there is a string for the source text (professional version), and a string for the target text (simplified version).
```
{'SOURCE': 'adenocarcinoma ductal de páncreas'
'TARGET': 'Cáncer de páncreas'}
```
### Data Fields
- `SOURCE`: a string containing the professional version.
- `TARGET`: a string containing the simplified version.
## Dataset Creation
### Source Data
#### Who are the source language producers?
1. Drug leaflets and summaries of product characteristics from [CIMA](https://cima.aemps.es)
2. Cancer-related information summaries from the [National Cancer Institute](https://www.cancer.gov/)
3. Clinical trials announcements from [EudraCT](https://www.clinicaltrialsregister.eu/)
### Annotations
#### Annotation process
Semi-automatic alignment of technical and patient versions of medical sentences. Inter-annotator agreement was measured with Cohen's Kappa (average Kappa = 0.839 ± 0.076; very high agreement).
#### Who are the annotators?
Leonardo Campillos-Llanos
Adrián Capllonch-Carrión
Ana Rosa Terroba-Reinares
Ana Valverde-Mateos
Sofía Zakhir-Puig
### Personal and Sensitive Information
No personal and sensitive information was used.
### Licensing Information
These data are aimed at research and educational purposes, and released under a Creative Commons Attribution-NonCommercial (CC BY-NC) 4.0 International License.
### Citation Information
Campillos Llanos, L., Terroba Reinares, A. R., Zakhir Puig, S., Valverde, A., & Capllonch-Carrión, A. (2022). Building a comparable corpus and a benchmark for Spanish medical text simplification. *Procesamiento del lenguaje natural*, 69, pp. 189--196.
### Contributions
Thanks to [Jónathan Heras from Universidad de La Rioja](http://www.unirioja.es/cu/joheras) ([@joheras](https://github.com/joheras)) for formatting this dataset for Hugging Face.
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664175 | 2022-10-25T15:21:54.000Z | null | false | 7f368064f1df591ec2cba22cab730eb8e9a53104 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_dev_cot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664175/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-30b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev_cot
dataset_config: mathemakitten--winobias_antistereotype_dev_cot
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664174 | 2022-10-25T14:57:27.000Z | null | false | 193f68d798850e2a593c181844a60af8b12267ed | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_dev_cot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664174/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-13b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev_cot
dataset_config: mathemakitten--winobias_antistereotype_dev_cot
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664170 | 2022-10-25T14:30:11.000Z | null | false | 5f080cd1756fbe0260163aefce18f65dbd0231f4 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_dev_cot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664170/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev_cot
eval_info:
task: text_zero_shot_classification
model: ArthurZ/opt-125m
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev_cot
dataset_config: mathemakitten--winobias_antistereotype_dev_cot
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664176 | 2022-10-25T16:42:14.000Z | null | false | 45ec734c3aa4ead5700762bee975f44b17e88c23 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_dev_cot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664176/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-66b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev_cot
dataset_config: mathemakitten--winobias_antistereotype_dev_cot
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664171 | 2022-10-25T14:31:02.000Z | null | false | 673278884406b493c92a897afdedd8b19d7778a9 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_dev_cot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664171/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev_cot
eval_info:
task: text_zero_shot_classification
model: ArthurZ/opt-350m
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev_cot
dataset_config: mathemakitten--winobias_antistereotype_dev_cot
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
Vecinito87 | null | null | null | false | null | false | Vecinito87/SD_IMG_POOL | 2022-10-25T15:07:47.000Z | null | false | ce2428a77872d198647fed39125b81a77dc71b1b | [] | [
"license:unknown"
] | https://huggingface.co/datasets/Vecinito87/SD_IMG_POOL/resolve/main/README.md | ---
license: unknown
---
|
ajankelo | null | null | null | false | 2 | false | ajankelo/pklot_50 | 2022-10-28T14:39:22.000Z | null | false | 1fc8d17a6617ec0ea4d098ff55b497b6a40187ec | [] | [
"language:en",
"license:cc-by-4.0",
"tags:PKLot",
"tags:object detection"
] | https://huggingface.co/datasets/ajankelo/pklot_50/resolve/main/README.md | ---
language: en
license: cc-by-4.0
tags:
- PKLot
- object detection
---
# PKLot 50
This dataset comprises 50 fully annotated images. The original images were introduced in [*PKLot – A robust dataset for parking lot classification*](https://www.inf.ufpr.br/lesoliveira/download/ESWA2015.pdf).
## Labeling Method
Labeling was manually completed using CVAT with the assistance of Voxel51 for inspection.
## Original dataset citation info
Almeida, P., Oliveira, L. S., Silva Jr, E., Britto Jr, A., Koerich, A., PKLot – A robust dataset for parking lot classification, Expert Systems with Applications, 42(11):4937-4949, 2015.
|
katossky | null | null | null | false | 107 | false | katossky/wine-recognition | 2022-10-29T10:22:58.000Z | null | false | 4cb09996580bc8efbc747911f8eb5e96340ef5a4 | [] | [
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"license:unknown",
"size_categories:n<1K",
"source_datasets:original",
"task_categories:tabular-classification",
"task_ids:tabular-multi-class-classification"
] | https://huggingface.co/datasets/katossky/wine-recognition/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language: []
language_creators:
- expert-generated
license:
- unknown
pretty_name: Wine Recognition Dataset
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- tabular-classification
task_ids:
- tabular-multi-class-classification
---
# Dataset Card for Wine Recognition dataset
## Dataset Description
- **Homepage:** https://archive.ics.uci.edu/ml/datasets/wine
- **Papers:**
1. S. Aeberhard, D. Coomans and O. de Vel,
Comparison of Classifiers in High Dimensional Settings,
Tech. Rep. no. 92-02, (1992), Dept. of Computer Science and Dept. of
Mathematics and Statistics, James Cook University of North Queensland.
2. S. Aeberhard, D. Coomans and O. de Vel,
"THE CLASSIFICATION PERFORMANCE OF RDA"
Tech. Rep. no. 92-01, (1992), Dept. of Computer Science and Dept. of
Mathematics and Statistics, James Cook University of North Queensland.
- **Point of Contact:** stefan'@'coral.cs.jcu.edu.au
### Dataset Summary
These data are the results of a chemical analysis of wines grown in the same region in Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wines. In a classification context, this is a well posed problem with "well behaved" class structures. A good data set for first testing of a new classifier, but not very challenging.
### Supported Tasks and Leaderboards
Classification (cultivar) from continuous variables (all other variables)
## Dataset Structure
### Data Instances
178 wines
### Data Fields
1. Wine category (cultivar)
2. Alcohol
3. Malic acid
4. Ash
5. Alcalinity of ash
6. Magnesium
7. Total phenols
8. Flavanoids
9. Nonflavanoid phenols
10. Proanthocyanins
11. Color intensity
12. Hue
13. OD280/OD315 of diluted wines
14. Proline
### Data Splits
None
## Dataset Creation
### Source Data
https://archive.ics.uci.edu/ml/datasets/wine
#### Initial Data Collection and Normalization
Original Owners:
Forina, M. et al, PARVUS -
An Extendible Package for Data Exploration, Classification and Correlation.
Institute of Pharmaceutical and Food Analysis and Technologies, Via Brigata Salerno,
16147 Genoa, Italy.
## Additional Information
### Dataset Curators
Stefan Aeberhard
### Licensing Information
No information found on the original website |
eliasnaranjom | null | null | null | false | null | false | eliasnaranjom/entrenamiento | 2022-10-25T16:25:48.000Z | null | false | 82e32713ee2a94bb407c50c698b9a0e62cd19e59 | [] | [
"license:other"
] | https://huggingface.co/datasets/eliasnaranjom/entrenamiento/resolve/main/README.md | ---
license: other
---
|
Whispering-GPT | null | null | null | false | 28 | false | Whispering-GPT/test_whisper | 2022-11-15T20:18:21.000Z | null | false | 33d6757e9126043ff82d7032e4f76824afd388ea | [] | [] | https://huggingface.co/datasets/Whispering-GPT/test_whisper/resolve/main/README.md | ---
dataset_info:
features:
- name: CHANNEL_NAME
dtype: string
- name: URL
dtype: string
- name: TITLE
dtype: string
- name: DESCRIPTION
dtype: string
- name: TRANSCRIPTION
dtype: string
- name: SEGMENTS
dtype: string
splits:
- name: train
num_bytes: 44027
num_examples: 12
download_size: 30955
dataset_size: 44027
---
# Dataset Card for "test_whisper"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064213 | 2022-10-25T17:31:46.000Z | null | false | 68de10d8afbe20cad6c000a2553d533209fad025 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064213/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-350m
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot
dataset_config: mathemakitten--winobias_antistereotype_test_cot
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064214 | 2022-10-25T19:35:25.000Z | null | false | 7e69f670cfbb39f3508e80e451ce7b23670decad | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064214/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-66b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot
dataset_config: mathemakitten--winobias_antistereotype_test_cot
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064210 | 2022-10-25T17:31:17.000Z | null | false | 4835a4ee92aee9bac60ad7dc8154c1f53d9ab40a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064210/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot
eval_info:
task: text_zero_shot_classification
model: ArthurZ/opt-125m
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot
dataset_config: mathemakitten--winobias_antistereotype_test_cot
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064212 | 2022-10-25T18:28:08.000Z | null | false | a5b40e34984ddd95bfeb302b23bcf53b95714bf7 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064212/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-30b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot
dataset_config: mathemakitten--winobias_antistereotype_test_cot
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064215 | 2022-10-25T17:44:32.000Z | null | false | b562e2007d01f1bafc34a270b018a1269e74ed9f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064215/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-6.7b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot
dataset_config: mathemakitten--winobias_antistereotype_test_cot
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064209 | 2022-10-25T17:32:11.000Z | null | false | 399a3b63758d394fbf31111d478a13aaa3a4539d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064209/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot
eval_info:
task: text_zero_shot_classification
model: ArthurZ/opt-350m
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot
dataset_config: mathemakitten--winobias_antistereotype_test_cot
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
BrainArtLabs | null | null | null | false | null | false | BrainArtLabs/LiminalSourceDiffusionV1 | 2022-10-25T18:08:28.000Z | null | false | 61db59aee71d376d9096eb0f2f575e40ea6ae344 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/BrainArtLabs/LiminalSourceDiffusionV1/resolve/main/README.md | ---
license: cc-by-4.0
---
|
ashraq | null | null | null | false | 2 | false | ashraq/financial-news-articles | 2022-10-25T18:01:06.000Z | null | false | 9920e8130b63513c598a6cdde10df3e2728bccef | [] | [] | https://huggingface.co/datasets/ashraq/financial-news-articles/resolve/main/README.md | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 848347009
num_examples: 306242
download_size: 492243206
dataset_size: 848347009
---
# Dataset Card for "financial-news-articles"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
The data was obtained from [here](https://www.kaggle.com/datasets/jeet2016/us-financial-news-articles) |
tkuye | null | null | null | false | null | false | tkuye/resuparse | 2022-10-25T22:09:47.000Z | null | false | 1697e92453b1870cacf8c0212bb892d1b5a7f5ce | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/tkuye/resuparse/resolve/main/README.md | ---
license: apache-2.0
---
|
tomekkorbak | null | null | null | false | 10 | false | tomekkorbak/code_search_data-pep8 | 2022-10-25T19:44:10.000Z | null | false | d57e1e36be67089516b1a173bdfe1ddc74d00d12 | [] | [] | https://huggingface.co/datasets/tomekkorbak/code_search_data-pep8/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
- name: func_path_in_repository
dtype: string
- name: func_name
dtype: string
- name: whole_func_string
dtype: string
- name: language
dtype: string
- name: func_code_string
dtype: string
- name: func_code_tokens
sequence: string
- name: func_documentation_string
dtype: string
- name: func_documentation_tokens
sequence: string
- name: split_name
dtype: string
- name: func_code_url
dtype: string
- name: score
dtype: float64
splits:
- name: test
num_bytes: 1373345211.3356366
num_examples: 362178
- name: train
num_bytes: 189595338.66436344
num_examples: 50000
download_size: 695684763
dataset_size: 1562940550.0
---
# Dataset Card for "code_search_data-pep8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tomekkorbak | null | null | null | false | 4 | false | tomekkorbak/codeparrot-pep8-scored | 2022-10-25T20:14:40.000Z | null | false | 9383a22eb926bd0335a2ad67f642b75b7f2ac33d | [] | [] | https://huggingface.co/datasets/tomekkorbak/codeparrot-pep8-scored/resolve/main/README.md | ---
dataset_info:
features:
- name: repo_name
dtype: string
- name: path
dtype: string
- name: copies
dtype: string
- name: size
dtype: string
- name: content
dtype: string
- name: license
dtype: string
- name: hash
dtype: int64
- name: line_mean
dtype: float64
- name: line_max
dtype: int64
- name: alpha_frac
dtype: float64
- name: autogenerated
dtype: bool
- name: ratio
dtype: float64
- name: config_test
dtype: bool
- name: has_no_keywords
dtype: bool
- name: few_assignments
dtype: bool
- name: score
dtype: float64
splits:
- name: test
num_bytes: 1556261021.25
num_examples: 150000
- name: train
num_bytes: 518753673.75
num_examples: 50000
download_size: 771399764
dataset_size: 2075014695.0
---
# Dataset Card for "codeparrot-pep8-scored"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lipaoMai | null | null | null | false | null | false | lipaoMai/github-issues | 2022-10-25T20:17:38.000Z | null | false | e64c6762a193e9c8b2bf95454422a560b1c5ca87 | [] | [] | https://huggingface.co/datasets/lipaoMai/github-issues/resolve/main/README.md | ---
dataset_info:
features:
- name: patient_id
dtype: int64
- name: drugName
dtype: string
- name: condition
dtype: string
- name: review
dtype: string
- name: rating
dtype: float64
- name: date
dtype: string
- name: usefulCount
dtype: int64
splits:
- name: test
num_bytes: 28367208
num_examples: 53471
- name: train
num_bytes: 85172055
num_examples: 160398
download_size: 63481104
dataset_size: 113539263
---
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lipaoMai | null | null | null | false | null | false | lipaoMai/drug_one_1dataset | 2022-10-25T20:27:56.000Z | null | false | 0da2571fe18ccc3748f7f202ee300a5824b33e37 | [] | [] | https://huggingface.co/datasets/lipaoMai/drug_one_1dataset/resolve/main/README.md | ---
dataset_info:
features:
- name: patient_id
dtype: int64
- name: drugName
dtype: string
- name: condition
dtype: string
- name: review
dtype: string
- name: rating
dtype: float64
- name: date
dtype: string
- name: usefulCount
dtype: int64
splits:
- name: test
num_bytes: 28367208
num_examples: 53471
- name: train
num_bytes: 85172055
num_examples: 160398
download_size: 63481104
dataset_size: 113539263
---
# Dataset Card for "drug_one_1dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Muennighoff | null | null | null | false | 2 | false | Muennighoff/P3 | 2022-11-03T15:15:39.000Z | null | false | 63f32b8f7bb300c1ac35e9146b38e7e2704c714d | [] | [
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language:en",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"task_categories:other"
] | https://huggingface.co/datasets/Muennighoff/P3/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: P3
size_categories:
- 100M<n<1B
task_categories:
- other
---
This is a re-preprocessed version of [P3](https://huggingface.co/datasets/bigscience/P3) with any updates that have been made to the P3 datasets since the release of the original P3. It is used for the finetuning of [bloomz-p3](https://huggingface.co/bigscience/bloomz-p3) & [mt0-xxl-p3](https://huggingface.co/bigscience/mt0-xxl-p3). The script is available [here](https://github.com/bigscience-workshop/bigscience/blob/638e66e40395dbfab9fa08a662d43b317fb2eb38/data/p3/prepare_p3.py).
|
olm | null | null | null | false | 3 | false | olm/olm-CC-MAIN-2017-22-sampling-ratio-0.16178770949 | 2022-11-04T17:12:48.000Z | null | false | 5ec4fd478a40966b89315c2ad181766210c6a9d7 | [] | [
"annotations_creators:no-annotation",
"language:en",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"tags:pretraining",
"tags:language modelling",
"tags:common crawl",
"tags:web"
] | https://huggingface.co/datasets/olm/olm-CC-MAIN-2017-22-sampling-ratio-0.16178770949/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: OLM May 2017 Common Crawl
size_categories:
- 10M<n<100M
source_datasets: []
tags:
- pretraining
- language modelling
- common crawl
- web
task_categories: []
task_ids: []
---
# Dataset Card for OLM May 2017 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 16% of the May 2017 Common Crawl snapshot.
Note: `last_modified_timestamp` was parsed from whatever a website returned in its `Last-Modified` header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with `last_modified_timestamp`. |
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/magic_armor | 2022-10-25T23:27:11.000Z | null | false | 43e6c210364333a854e568c24324db3fd67875d8 | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/magic_armor/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Magic Armor Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"art by magic_armor"```
If it is too strong, just add [] around it.
Trained for 10000 steps
I added a 7.5k-step trained version in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-step version in your folder
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/3O5YpWT.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/icDlRiA.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/AcrdSwB.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/hP923FH.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/RzSFggo.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
zZWipeoutZz | null | null | null | false | 1 | false | zZWipeoutZz/crusader_knight | 2022-10-26T00:47:13.000Z | null | false | d7837f0e3a1e66eaa1884e7a29c7a40ad5c76e0a | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/zZWipeoutZz/crusader_knight/resolve/main/README.md | ---
license: creativeml-openrail-m
---
<h4> Disclosure </h4>
<p> This is my 1st attempt at an embedding; while it's not perfect, I hope that you are able to create some nice pieces with it. I am working on improving for the next embedding coming soon. If you have any suggestions or issues, please let me know. </p>
<h4> Usage </h4>
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
<em style="font-weight:600">" art by crusader_knight "</em>
add <b>[ ]</b> around it to reduce its weight.
<h4> Included Files </h4>
<ul>
<li>15,000</li>
<li>10,000</li>
<li>6500</li>
</ul>
cheers
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody><tr>
<td><img height="100%/" width="100%" src="https://i.imgur.com/jx0F3zi.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/HZkt3Nx.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/MLKhJXL.png"></td>
</tr>
</tbody>
</table>
<h4> Licence </h4>
<p><span>This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:</span> </p>
<ol>
<li>You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content </li>
<li>The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license</li>
<li>You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
<a rel="noopener nofollow" href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">Please read the full license here</a></li>
</ol> |