author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
LeandraFichtel | null | @inproceedings{kalo2022kamel,
title={KAMEL: Knowledge Analysis with Multitoken Entities in Language Models},
author={Kalo, Jan-Christoph and Fichtel, Leandra},
booktitle={Automated Knowledge Base Construction},
year={2022}
} | This dataset provides the data for KAMEL, a probing dataset for language models that contains factual knowledge
from Wikidata and Wikipedia. | false | 1 | false | LeandraFichtel/KAMEL | 2022-11-03T16:39:49.000Z | null | false | 7af12b091affeb6e55d0f4871dc98af83fabe28b | [] | [] | https://huggingface.co/datasets/LeandraFichtel/KAMEL/resolve/main/README.md | ---
# Dataset Card for KAMEL: Knowledge Analysis with Multitoken Entities in Language Models
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
https://github.com/JanKalo/KAMEL
- **Repository:**
https://github.com/JanKalo/KAMEL
- **Paper:**
@inproceedings{kalo2022kamel,
title={KAMEL: Knowledge Analysis with Multitoken Entities in Language Models},
author={Kalo, Jan-Christoph and Fichtel, Leandra},
booktitle={Automated Knowledge Base Construction},
year={2022}
}
### Dataset Summary
This dataset provides the data for KAMEL, a probing dataset for language models that contains factual knowledge
from Wikidata and Wikipedia.
See the paper for more details. For more information, also see:
https://github.com/JanKalo/KAMEL
### Languages
en
## Dataset Structure
### Data Instances
### Data Fields
KAMEL has the following fields:
* index: the id
* sub_label: a label for the subject
* obj_uri: Wikidata uri for the object
* obj_labels: multiple labels for the object
* chosen_label: the preferred label
* rel_uri: Wikidata uri for the relation
* rel_label: a label for the relation
### Data Splits
The dataset is split into a training, validation, and test dataset.
It contains 234 Wikidata relations.
For each relation, there are 200 training, 100 validation,
and 100 test instances.
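Given the per-relation counts stated above, the overall split sizes follow by simple arithmetic; a minimal sketch:

```python
# Per-relation split sizes as described in this card.
relations = 234
train_per_rel, val_per_rel, test_per_rel = 200, 100, 100

totals = {
    "train": relations * train_per_rel,
    "validation": relations * val_per_rel,
    "test": relations * test_per_rel,
}
print(totals)  # {'train': 46800, 'validation': 23400, 'test': 23400}
```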
## Dataset Creation
### Curation Rationale
This dataset was gathered and created to explore what knowledge graph facts are memorized by large language models.
### Source Data
#### Initial Data Collection and Normalization
See the research paper and website for more details. The dataset was
created from Wikidata and Wikipedia.
### Annotations
#### Annotation process
There is no human annotation; only automatic linking from Wikidata facts to Wikipedia articles.
The details about the process can be found in the paper.
#### Who are the annotators?
Machine Annotations
### Personal and Sensitive Information
Unknown, but the dataset likely includes information about famous people mentioned in the English Wikipedia.
## Considerations for Using the Data
### Social Impact of Dataset
The goal of this work is to probe the factual understanding of language models.
### Discussion of Biases
Since the data is created from Wikipedia and Wikidata, the existing biases from these two data sources may also be reflected in KAMEL.
## Additional Information
### Dataset Curators
The authors of KAMEL at Vrije Universiteit Amsterdam and Technische Universität Braunschweig.
### Licensing Information
Creative Commons Attribution-NonCommercial 4.0 International License; see https://github.com/facebookresearch/LAMA/blob/master/LICENSE
### Citation Information
@inproceedings{kalo2022kamel,
title={KAMEL: Knowledge Analysis with Multitoken Entities in Language Models},
author={Kalo, Jan-Christoph and Fichtel, Leandra},
booktitle={Automated Knowledge Base Construction},
year={2022}
}
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-fc121d-1975865996 | 2022-11-03T14:11:13.000Z | null | false | dfa2ec4ee00fcd57232b5edaa3e37a5ab1c0985e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-fc121d-1975865996/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ce107](https://huggingface.co/ce107) for evaluating this model. |
sileod | null | @article{sileo2022probing,
title={Probing neural language models for understanding of words of estimative probability},
author={Sileo, Damien and Moens, Marie-Francine},
journal={arXiv preprint arXiv:2211.03358},
year={2022}
} | Probing neural language models for understanding of words of estimative probability | false | 3 | false | sileod/wep-probes | 2022-11-15T08:17:18.000Z | null | false | 2ca435dd969e7714405a7a514edd8b637964046a | [] | [
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language:en",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:question-answering",
"task_categories:multiple-choice",
"task_ids:open-domain-qa",... | https://huggingface.co/datasets/sileod/wep-probes/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: 'wep-probes'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
- multiple-choice
task_ids:
- open-domain-qa
- multiple-choice-qa
- natural-language-inference
tags:
- wep
- words of estimative probability
- probability
- logical reasoning
- soft logic
---
# Dataset accompanying the "Probing neural language models for understanding of words of estimative probability" article
```bib
@article{sileo2022probing,
title={Probing neural language models for understanding of words of estimative probability},
author={Sileo, Damien and Moens, Marie-Francine},
journal={arXiv preprint arXiv:2211.03358},
year={2022}
}
``` |
popaqy | null | null | null | false | 2 | false | popaqy/my_dataset | 2022-11-03T14:27:55.000Z | null | false | 772d7f4015382026d97b6c8a2e477a8a3f1fbbc6 | [] | [] | https://huggingface.co/datasets/popaqy/my_dataset/resolve/main/README.md | ---
dataset_info:
features:
- name: bg
dtype: string
- name: en
dtype: string
- name: bg_wrong
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1792707
num_examples: 3442
download_size: 908032
dataset_size: 1792707
---
# Dataset Card for "my_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cannlytics | null | null | null | false | null | false | cannlytics/cannabis_strains | 2022-11-03T15:03:08.000Z | null | false | 166086fcbdca991f9f39b9b20bb2157c0d29304e | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/cannlytics/cannabis_strains/resolve/main/README.md | ---
license: cc-by-4.0
---
|
loubnabnl | null | null | null | false | null | false | loubnabnl/pii_labeling_dataset_v2 | 2022-11-03T16:01:10.000Z | null | false | 461324e3df40ab624bebe0bf0e8a9a8cc6553714 | [] | [] | https://huggingface.co/datasets/loubnabnl/pii_labeling_dataset_v2/resolve/main/README.md | ---
dataset_info:
features:
- name: licenses
sequence: string
- name: repository_name
dtype: string
- name: path
dtype: string
- name: size
dtype: int64
- name: lang
dtype: string
- name: regex_metadata
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 4304335.035
num_examples: 445
download_size: 3569665
dataset_size: 4304335.035
---
# Dataset Card for "pii_labeling_dataset_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
polinaeterna | null | null | null | false | 1 | false | polinaeterna/test_push | 2022-11-03T16:23:44.000Z | null | false | 708e9f06c286a367cb4c3e11d4d0e48d8c005a79 | [] | [] | https://huggingface.co/datasets/polinaeterna/test_push/resolve/main/README.md | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype:
class_label:
names:
0: dir1
1: dir2
2: main
splits:
- name: train
num_bytes: 1361348.0
num_examples: 4
download_size: 982657
dataset_size: 1361348.0
---
# Dataset Card for "test_push"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LiveEvil | null | null | null | false | null | false | LiveEvil/Teshjsdf | 2022-11-03T16:16:47.000Z | null | false | a340e6425ffe90c222de7847a260d140bdb42fde | [] | [
"license:mit"
] | https://huggingface.co/datasets/LiveEvil/Teshjsdf/resolve/main/README.md | ---
license: mit
---
|
polinaeterna | null | null | null | false | 1 | false | polinaeterna/test_push2 | 2022-11-03T16:25:59.000Z | null | false | 8c0e551fae360bd8777c3c0476270efe78e54a1d | [] | [] | https://huggingface.co/datasets/polinaeterna/test_push2/resolve/main/README.md | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype:
class_label:
names:
0: dir1
1: dir2
2: main
splits:
- name: train
num_bytes: 1361348.0
num_examples: 4
download_size: 982657
dataset_size: 1361348.0
---
# Dataset Card for "test_push2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
loubnabnl | null | null | null | false | null | false | loubnabnl/pii_labeling_pre_filter | 2022-11-03T17:07:43.000Z | null | false | c99dbed1091e98127390340bfbf218e0a3075183 | [] | [] | https://huggingface.co/datasets/loubnabnl/pii_labeling_pre_filter/resolve/main/README.md | ---
dataset_info:
features:
- name: licenses
sequence: string
- name: repository_name
dtype: string
- name: path
dtype: string
- name: size
dtype: int64
- name: lang
dtype: string
- name: regex_metadata
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 3869065.2
num_examples: 400
download_size: 1257731
dataset_size: 3869065.2
---
# Dataset Card for "pii_labeling_pre_filter"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FAERS-PubMed | null | null | null | false | null | false | FAERS-PubMed/FAERS-filenames-2022-11-03 | 2022-11-03T23:57:37.000Z | null | false | 60e344791d3d8eb3c1cf906fd4225f585dbaf8b8 | [] | [] | https://huggingface.co/datasets/FAERS-PubMed/FAERS-filenames-2022-11-03/resolve/main/README.md | ---
dataset_info:
features:
- name: filenames
dtype: string
splits:
- name: train
num_bytes: 1590
num_examples: 60
download_size: 0
dataset_size: 1590
---
# Dataset Card for "FAERS-filenames-2022-11-03"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ccao | null | null | null | false | null | false | ccao/monkey | 2022-11-03T19:01:28.000Z | null | false | c1ed2fbed1fabcade984b7ce47c76b8f1e2559a6 | [] | [
"license:bsd"
] | https://huggingface.co/datasets/ccao/monkey/resolve/main/README.md | ---
license: bsd
---
|
reactehr | null | null | null | false | null | false | reactehr/cardioSample | 2022-11-03T20:07:52.000Z | null | false | 4fd93625057952828fb7b7894ca5e9455d6429fd | [] | [] | https://huggingface.co/datasets/reactehr/cardioSample/resolve/main/README.md | annotations_creators:
- no-annotation
language:
- en
language_creators:
- other
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: cardiology reference notes
size_categories:
- unknown
source_datasets: []
tags: []
task_categories:
- text-retrieval
- text-generation
- feature-extraction
task_ids:
- entity-linking-retrieval
- dialogue-modeling
|
fkdosilovic | null | null | null | false | 2 | false | fkdosilovic/docee-event-classification | 2022-11-03T21:39:31.000Z | null | false | 548191053344a231c016a74927e87fae9fef786d | [] | [
"language:en",
"license:mit",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"tags:wiki",
"tags:news",
"tags:event-detection",
"task_categories:text-classification",
"task_ids:multi-class-classification"
] | https://huggingface.co/datasets/fkdosilovic/docee-event-classification/resolve/main/README.md | ---
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: DocEE
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- wiki
- news
- event-detection
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for DocEE Dataset
## Dataset Description
- **Homepage:**
- **Repository:** [DocEE Dataset repository](https://github.com/tongmeihan1995/docee)
- **Paper:** [DocEE: A Large-Scale and Fine-grained Benchmark for Document-level Event Extraction](https://aclanthology.org/2022.naacl-main.291/)
### Dataset Summary
The DocEE dataset is an English-language dataset containing more than 27k news and Wikipedia articles. The dataset was primarily annotated and collected for large-scale document-level event extraction.
### Data Fields
- `title`: TODO
- `text`: TODO
- `event_type`: TODO
- `date`: TODO
- `metadata`: TODO
**Note: this repo contains only event detection portion of the dataset.**
### Data Splits
The dataset has 2 splits: _train_ and _test_. The train split contains 21,949 documents, while the test split contains 5,536 documents. In total, the dataset contains 27,485 documents classified into 59 event types.
#### Differences from the original split(s)
Originally, the dataset was split into three parts: train, validation, and test. For the purposes of this repository, the original splits were joined back together and re-divided into train and test splits while making sure that the splits were stratified across document sources (news and wiki) and event types.
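The stratified re-split described above can be sketched with the standard library alone. This is a hypothetical illustration, not the script actually used; the field names (`source`, `event_type`) and the toy documents are made up for the example.

```python
import random
from collections import defaultdict

# Toy corpus: two strata of 40 documents each (illustrative, not the real schema).
docs = [
    {"source": src, "event_type": ev, "doc": f"{src}-{ev}-{i}"}
    for src, ev in [("news", "flood"), ("wiki", "earthquake")]
    for i in range(40)
]

def stratified_split(items, key, test_frac=0.2, seed=0):
    """Split items so each stratum contributes the same train/test ratio."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for it in items:
        strata[key(it)].append(it)
    train, test = [], []
    for group in strata.values():
        rng.shuffle(group)
        n_test = round(len(group) * test_frac)
        test.extend(group[:n_test])
        train.extend(group[n_test:])
    return train, test

# Stratify jointly on (source, event_type) so both splits keep the same mix.
train, test = stratified_split(docs, key=lambda d: (d["source"], d["event_type"]))
print(len(train), len(test))  # 64 16
```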
Originally, the `title` column additionally contained information from the `date` and `metadata` columns. This information is now separated into three columns: `date`, `metadata`, and `title`. |
FAERS-PubMed | null | null | null | false | null | false | FAERS-PubMed/PubMed-filenames-2022-11-03 | 2022-11-04T01:01:02.000Z | null | false | 7202640e4cb093a387c16dbd8c248342118a1238 | [] | [] | https://huggingface.co/datasets/FAERS-PubMed/PubMed-filenames-2022-11-03/resolve/main/README.md | ---
dataset_info:
features:
- name: filenames
dtype: string
splits:
- name: train
num_bytes: 72410
num_examples: 1114
download_size: 0
dataset_size: 72410
---
# Dataset Card for "PubMed-filenames-2022-11-03"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
riccardogiorato | null | null | null | false | null | false | riccardogiorato/beeple-everyday | 2022-11-03T21:12:57.000Z | null | false | 8b48d820c4bc9f34966fb2ee24f3adb783d20d88 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/riccardogiorato/beeple-everyday/resolve/main/README.md | ---
license: creativeml-openrail-m
---
# Dataset Card for Beeple Everyday
Dataset used to train [beeple-diffusion](https://huggingface.co/riccardogiorato/beeple-diffusion).
The original images were obtained from [twitter.com/beeple](https://twitter.com/beeple/media).
## Citation
If you use this dataset, please cite it as:
```
@misc{gioratobeeple-everyday,
author = {Riccardo, Giorato},
title = {Beeple Everyday},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/riccardogiorato/beeple-everyday/}}
}
```
|
LiveEvil | null | null | null | false | null | false | LiveEvil/RealSrry | 2022-11-03T21:41:04.000Z | null | false | 611f9f86637b91ddaa36a0ad60d7ebea0ab73ccf | [] | [
"license:other"
] | https://huggingface.co/datasets/LiveEvil/RealSrry/resolve/main/README.md | ---
license: other
---
|
LiveEvil | null | null | null | false | null | false | LiveEvil/RealTrain | 2022-11-03T21:45:06.000Z | null | false | d30422b378c7138835536625cd37dca0b29572ff | [] | [
"license:mit"
] | https://huggingface.co/datasets/LiveEvil/RealTrain/resolve/main/README.md | ---
license: mit
---
|
stauntonjr | null | null | null | false | 6 | false | stauntonjr/dtic_sent | 2022-11-03T23:37:08.000Z | null | false | b3187f53037e244e39c29606e357bdd411b46801 | [] | [] | https://huggingface.co/datasets/stauntonjr/dtic_sent/resolve/main/README.md | ---
dataset_info:
features:
- name: Accession Number
dtype: string
- name: Title
dtype: string
- name: Descriptive Note
dtype: string
- name: Corporate Author
dtype: string
- name: Personal Author(s)
sequence: string
- name: Report Date
dtype: string
- name: Pagination or Media Count
dtype: string
- name: Descriptors
sequence: string
- name: Subject Categories
dtype: string
- name: Distribution Statement
dtype: string
- name: fulltext
dtype: string
- name: cleantext
dtype: string
- name: sents
sequence: string
splits:
- name: train
num_bytes: 6951041151
num_examples: 27425
download_size: 3712549813
dataset_size: 6951041151
---
# Dataset Card for "dtic_sent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ju-resplande | null | null | null | false | null | false | ju-resplande/qa-pt | 2022-11-04T01:08:31.000Z | null | false | 71a04f8193fdbcf408a47d2a040902f3ef954438 | [] | [
"annotations_creators:no-annotation",
"language_creators:other",
"language:pt",
"license:cc0-1.0",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:extended|mqa",
"task_categories:question-answering",
"task_ids:multiple-choice-qa"
] | https://huggingface.co/datasets/ju-resplande/qa-pt/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- other
language:
- pt
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: qa-portuguese
size_categories:
- 10M<n<100M
source_datasets:
- extended|mqa
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
---
# Dataset Card for QA-Portuguese
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Portuguese preprocessed split from [MQA dataset](https://huggingface.co/datasets/clips/mqa).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is Portuguese.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset.
|
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/nixeu_style | 2022-11-03T23:36:01.000Z | null | false | 4acd51b06d689bf2d0cb95dce6b552909584e8ba | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/nixeu_style/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Nixeu Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by nixeu_style"```
Use the embedding with one of [SirVeggie's](https://huggingface.co/SirVeggie) Nixeu or Wlop models for best results
If it is too strong, just add [] around it.
Trained for 8,400 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/5Rg6a3N.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/oWqYTHL.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/45GFoZf.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/NU8Rc4z.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/Yvl836l.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The author claims no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
mariopeng | null | null | null | false | null | false | mariopeng/openwebIPA | 2022-11-03T23:54:07.000Z | null | false | 563663e3d9cd595fc13750738c733d347117c796 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/mariopeng/openwebIPA/resolve/main/README.md | ---
license: openrail
---
|
camilacorreamelo | null | null | null | false | null | false | camilacorreamelo/camilacorreamelo | 2022-11-05T15:49:25.000Z | null | false | f4a05a82646d34a07f7a830e02a6eca0cc112e7f | [] | [] | https://huggingface.co/datasets/camilacorreamelo/camilacorreamelo/resolve/main/README.md | |
dalow24 | null | null | null | false | null | false | dalow24/testing | 2022-11-04T01:25:50.000Z | null | false | cec8cd7af9b951972b470c917802172d0398b1a7 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/dalow24/testing/resolve/main/README.md | ---
license: afl-3.0
---
|
pr0godxxx | null | null | null | false | null | false | pr0godxxx/pc | 2022-11-06T06:58:58.000Z | null | false | c73c57d30b27b00f9b31ba76132e36aa403fa99a | [] | [
"license:cc-by-nc-sa-4.0",
"annotations_creators:machine-generated",
"language:en",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:huggan/few-shot-pokemon",
"task_categories:text-to-image"
] | https://huggingface.co/datasets/pr0godxxx/pc/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: Pokémon BLIP captions
size_categories:
- n<1K
source_datasets:
- huggan/few-shot-pokemon
tags: []
task_categories:
- text-to-image
task_ids: []
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 20537.0
num_examples: 1
download_size: 21610
dataset_size: 20537.0
---
# Dataset Card for Pokémon BLIP captions
_Dataset used to train [Pokémon text to image model](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning)_
BLIP generated captions for Pokémon images from Few Shot Pokémon dataset introduced by _Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis_ (FastGAN). Original images were obtained from [FastGAN-pytorch](https://github.com/odegeasslbc/FastGAN-pytorch) and captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
## Examples

> a drawing of a green pokemon with red eyes

> a green and yellow toy with a red nose

> a red and white ball with an angry look on its face
## Citation
If you use this dataset, please cite it as:
```
@misc{pinkney2022pokemon,
author = {Pinkney, Justin N. M.},
title = {Pokemon BLIP captions},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions/}}
}
``` |
ktmeng | null | null | null | false | 29 | false | ktmeng/mec | 2022-11-04T05:40:39.000Z | null | false | 5cfd2faebc11c885a4b7fe7bc1507b0070824fd7 | [] | [
"license:mit"
] | https://huggingface.co/datasets/ktmeng/mec/resolve/main/README.md | ---
license: mit
---
|
lmqg | null | @inproceedings{du-cardie-2018-harvesting,
title = "Harvesting Paragraph-level Question-Answer Pairs from {W}ikipedia",
author = "Du, Xinya and
Cardie, Claire",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P18-1177",
doi = "10.18653/v1/P18-1177",
pages = "1907--1917",
abstract = "We study the task of generating from Wikipedia articles question-answer pairs that cover content beyond a single sentence. We propose a neural network approach that incorporates coreference knowledge via a novel gating mechanism. As compared to models that only take into account sentence-level information (Heilman and Smith, 2010; Du et al., 2017; Zhou et al., 2017), we find that the linguistic knowledge introduced by the coreference representation aids question generation significantly, producing models that outperform the current state-of-the-art. We apply our system (composed of an answer span extraction system and the passage-level QG system) to the 10,000 top ranking Wikipedia articles and create a corpus of over one million question-answer pairs. We provide qualitative analysis for the this large-scale generated corpus from Wikipedia.",
} | QA pairs generated in https://aclanthology.org/P18-1177/ | false | 11 | false | lmqg/qa_harvesting_from_wikipedia | 2022-11-05T03:19:40.000Z | null | false | 849be46ab60cfbd53a5bd950538253aecd6cea78 | [] | [
"license:cc-by-4.0",
"language:en",
"multilinguality:monolingual",
"size_categories:1M<",
"source_datasets:extended|wikipedia",
"task_categories:question-answering",
"task_ids:extractive-qa"
] | https://huggingface.co/datasets/lmqg/qa_harvesting_from_wikipedia/resolve/main/README.md | ---
license: cc-by-4.0
pretty_name: Harvesting QA pairs from Wikipedia.
language: en
multilinguality: monolingual
size_categories: 1M<
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for "lmqg/qa_harvesting_from_wikipedia"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://aclanthology.org/P18-1177/](https://aclanthology.org/P18-1177/)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the QA dataset collected by [Harvesting Paragraph-level Question-Answer Pairs from Wikipedia](https://aclanthology.org/P18-1177) (Du & Cardie, ACL 2018).
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
### Data Splits
|train |validation|test |
|--------:|---------:|-------:|
|1,204,925| 30,293| 24,473|
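To make the fields above concrete, here is a minimal sketch of reading one record, assuming the SQuAD-style `{"text": [...], "answer_start": [...]}` layout for the `answers` JSON feature; the record itself is made up for illustration.

```python
# Hypothetical record in the format described by this card's field list.
record = {
    "id": "example-0",
    "title": "Paris",
    "context": "Paris is the capital of France.",
    "question": "What is the capital of France?",
    "answers": {"text": ["Paris"], "answer_start": [0]},
}

start = record["answers"]["answer_start"][0]
answer = record["answers"]["text"][0]
# The stored span should match the context at the given character offset.
assert record["context"][start : start + len(answer)] == answer
print(answer)  # Paris
```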
## Citation Information
```
@inproceedings{du-cardie-2018-harvesting,
title = "Harvesting Paragraph-level Question-Answer Pairs from {W}ikipedia",
author = "Du, Xinya and
Cardie, Claire",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P18-1177",
doi = "10.18653/v1/P18-1177",
pages = "1907--1917",
abstract = "We study the task of generating from Wikipedia articles question-answer pairs that cover content beyond a single sentence. We propose a neural network approach that incorporates coreference knowledge via a novel gating mechanism. As compared to models that only take into account sentence-level information (Heilman and Smith, 2010; Du et al., 2017; Zhou et al., 2017), we find that the linguistic knowledge introduced by the coreference representation aids question generation significantly, producing models that outperform the current state-of-the-art. We apply our system (composed of an answer span extraction system and the passage-level QG system) to the 10,000 top ranking Wikipedia articles and create a corpus of over one million question-answer pairs. We provide qualitative analysis for the this large-scale generated corpus from Wikipedia.",
}
``` |
KoziCreative | null | null | null | false | null | false | KoziCreative/Testing | 2022-11-04T09:31:40.000Z | null | false | f33015cbbb9603eafb301548bd4d43aad6354c64 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/KoziCreative/Testing/resolve/main/README.md | ---
license: afl-3.0
---
|
Ayush2609 | null | null | null | false | null | false | Ayush2609/auto_content | 2022-11-04T09:32:44.000Z | null | false | 7d5efeb7e157099ebd0f630628e64b1cdc97f6e2 | [] | [] | https://huggingface.co/datasets/Ayush2609/auto_content/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 25207.5885509839
num_examples: 503
- name: validation
num_bytes: 2806.4114490161
num_examples: 56
download_size: 19771
dataset_size: 28014.0
---
# Dataset Card for "auto_content"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PartiallyTyped | null | null | null | false | 282 | false | PartiallyTyped/answerable_tydiqa | 2022-11-04T09:45:10.000Z | null | false | cc540899103705a0cb87bea53bda71fa14a80737 | [] | [] | https://huggingface.co/datasets/PartiallyTyped/answerable_tydiqa/resolve/main/README.md | ---
dataset_info:
features:
- name: question_text
dtype: string
- name: document_title
dtype: string
- name: language
dtype: string
- name: annotations
struct:
- name: answer_start
sequence: int64
- name: answer_text
sequence: string
- name: document_plaintext
dtype: string
- name: document_url
dtype: string
splits:
- name: train
num_bytes: 32084629.326371837
num_examples: 29868
- name: validation
num_bytes: 3778385.324427767
num_examples: 3712
download_size: 16354337
dataset_size: 35863014.6507996
---
# Dataset Card for "answerable_tydiqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PartiallyTyped | null | null | null | false | 68 | false | PartiallyTyped/answerable_tydiqa_restructured | 2022-11-04T09:45:41.000Z | null | false | f71b7973349141cb8a3d40b6ee2797830f62ae68 | [] | [] | https://huggingface.co/datasets/PartiallyTyped/answerable_tydiqa_restructured/resolve/main/README.md | ---
dataset_info:
features:
- name: language
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: references
struct:
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: id
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 21940019
num_examples: 29868
- name: validation
num_bytes: 2730209
num_examples: 3712
download_size: 17468684
dataset_size: 24670228
---
# Dataset Card for "answerable_tydiqa_restructured"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PartiallyTyped | null | null | null | false | 259 | false | PartiallyTyped/answerable_tydiqa_preprocessed | 2022-11-04T09:46:21.000Z | null | false | 90b5976050208f4ab764422c334b95dfd681e4f0 | [] | [] | https://huggingface.co/datasets/PartiallyTyped/answerable_tydiqa_preprocessed/resolve/main/README.md | ---
dataset_info:
features:
- name: language
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: references
struct:
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: id
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 21252073.336011786
num_examples: 29800
- name: validation
num_bytes: 2657400.5792025863
num_examples: 3709
download_size: 16838253
dataset_size: 23909473.91521437
---
# Dataset Card for "answerable_tydiqa_preprocessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PartiallyTyped | null | null | null | false | 53 | false | PartiallyTyped/answerable_tydiqa_tokenized | 2022-11-04T09:47:12.000Z | null | false | b20f6950ca9773dac84e57b2f052cc9c3fcdf448 | [] | [] | https://huggingface.co/datasets/PartiallyTyped/answerable_tydiqa_tokenized/resolve/main/README.md | ---
dataset_info:
features:
- name: language
dtype: string
- name: question
sequence: string
- name: context
sequence: string
- name: references
struct:
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: id
dtype: string
- name: id
dtype: string
- name: labels
dtype: bool
splits:
- name: train
num_bytes: 30320669
num_examples: 29800
- name: validation
num_bytes: 3761508
num_examples: 3709
download_size: 17981416
dataset_size: 34082177
---
# Dataset Card for "answerable_tydiqa_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/guweiz_style | 2022-11-04T10:14:19.000Z | null | false | 148e1cda53c9697ea386953a60e8493dbd102cb1 | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/guweiz_style/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Guweiz Artist Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by guweiz_style"```
If it is too strong, just add [] around it.
Trained until 9000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/eCbB30e.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/U1Fezud.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/DqruJgs.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/O7VV7BS.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/k4sIsvH.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
Lucapro | null | null | null | false | null | false | Lucapro/tx-data-to-decode | 2022-11-04T10:22:12.000Z | null | false | 60d8a487125ced60f6cd19e37aac3739d135b6b5 | [] | [] | https://huggingface.co/datasets/Lucapro/tx-data-to-decode/resolve/main/README.md | ---
dataset_info:
features:
- name: en
dtype: string
- name: de
dtype: string
splits:
- name: train
num_bytes: 3527858
num_examples: 6057
download_size: 995171
dataset_size: 3527858
---
# Dataset Card for "tx-data-to-decode"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MartinMu | null | null | null | false | null | false | MartinMu/SD-Training | 2022-11-04T10:49:52.000Z | null | false | b578d37c60f8311c642fb7b6838fadef45cdd2a0 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/MartinMu/SD-Training/resolve/main/README.md | ---
license: openrail
---
|
paweljp | null | null | null | false | null | false | paweljp/Tylercrimetime | 2022-11-04T11:02:28.000Z | null | false | e35a91f8e7cb3c201a7211b53219f0d8833f1a3d | [] | [
"license:unknown"
] | https://huggingface.co/datasets/paweljp/Tylercrimetime/resolve/main/README.md | ---
license: unknown
---
|
rjac | null | null | null | false | 17 | false | rjac/icd10-reference-cm | 2022-11-04T11:23:29.000Z | null | false | 626de4a1bf832412aed03cd731b74bc5ac978fcb | [] | [] | https://huggingface.co/datasets/rjac/icd10-reference-cm/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
- name: icd10_tc_category
dtype: string
- name: icd10_tc_category_group
dtype: string
splits:
- name: train
num_bytes: 13286095
num_examples: 71480
download_size: 2715065
dataset_size: 13286095
---
# Dataset Card for "icd10-reference-cm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MarkGG | null | null | null | false | 18 | false | MarkGG/Pierse-movie-dataset | 2022-11-04T11:35:26.000Z | null | false | 587e3170fcb95d51295acfea053c6570cedd8a41 | [] | [] | https://huggingface.co/datasets/MarkGG/Pierse-movie-dataset/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 53518991.51408206
num_examples: 1873138
- name: validation
num_bytes: 5946570.485917939
num_examples: 208127
download_size: 33525659
dataset_size: 59465562.0
---
# Dataset Card for "Pierse-movie-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-joelito__brazilian_court_decisions-joelito__brazilian_c-4bed1b-1985466167 | 2022-11-04T13:22:24.000Z | null | false | 7f5cd8bfac9cee6eb3a88ba576779a76c30bf806 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:joelito/brazilian_court_decisions"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-joelito__brazilian_court_decisions-joelito__brazilian_c-4bed1b-1985466167/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- joelito/brazilian_court_decisions
eval_info:
task: multi_class_classification
model: Luciano/bertimbau-base-finetuned-brazilian_court_decisions
metrics: []
dataset_name: joelito/brazilian_court_decisions
dataset_config: joelito--brazilian_court_decisions
dataset_split: test
col_mapping:
text: decision_description
target: judgment_label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Luciano/bertimbau-base-finetuned-brazilian_court_decisions
* Dataset: joelito/brazilian_court_decisions
* Config: joelito--brazilian_court_decisions
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-joelito__brazilian_court_decisions-joelito__brazilian_c-4bed1b-1985466168 | 2022-11-04T13:22:29.000Z | null | false | 04201c6a1a1cb7f50160ab3b0e0a7a630bef5463 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:joelito/brazilian_court_decisions"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-joelito__brazilian_court_decisions-joelito__brazilian_c-4bed1b-1985466168/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- joelito/brazilian_court_decisions
eval_info:
task: multi_class_classification
model: Luciano/bertimbau-base-finetuned-lener-br-finetuned-brazilian_court_decisions
metrics: []
dataset_name: joelito/brazilian_court_decisions
dataset_config: joelito--brazilian_court_decisions
dataset_split: test
col_mapping:
text: decision_description
target: judgment_label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Luciano/bertimbau-base-finetuned-lener-br-finetuned-brazilian_court_decisions
* Dataset: joelito/brazilian_court_decisions
* Config: joelito--brazilian_court_decisions
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
polinaeterna | null | null | null | false | null | false | polinaeterna/test_splits_order | 2022-11-04T13:30:57.000Z | null | false | 46f712c7d0dbfb4aaa83bdce8c4f9a4c2f080e69 | [] | [] | https://huggingface.co/datasets/polinaeterna/test_splits_order/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: test
num_bytes: 32
num_examples: 2
- name: train
num_bytes: 48
num_examples: 2
download_size: 1776
dataset_size: 80
---
# Dataset Card for "test_splits_order"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
marianna13 | null | null | null | false | null | false | marianna13/laion2B-multi-joined-translated-to-en-hr | 2022-11-07T14:10:48.000Z | null | false | 06ed218989fe8d663592ac82d4b1a2118e0ee2bd | [] | [] | https://huggingface.co/datasets/marianna13/laion2B-multi-joined-translated-to-en-hr/resolve/main/README.md | ---
license: cc-by-4.0
---
|
polinaeterna | null | null | null | false | null | false | polinaeterna/test_splits | 2022-11-04T13:59:01.000Z | null | false | 0a118a6d943dba991d968c909121d7e231f968f0 | [] | [] | https://huggingface.co/datasets/polinaeterna/test_splits/resolve/main/README.md | ---
dataset_info:
features:
- name: x
dtype: int64
- name: y
dtype: string
splits:
- name: train
num_bytes: 116
num_examples: 8
- name: test
num_bytes: 46
num_examples: 3
download_size: 1698
dataset_size: 162
---
# Dataset Card for "test_splits"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yubing | null | null | null | false | null | false | Yubing/Ubin | 2022-11-04T14:23:56.000Z | null | false | 6307437ad30f1172d69671dd1380e8d652c1fd0e | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Yubing/Ubin/resolve/main/README.md | ---
license: openrail
---
|
VXX | null | null | null | false | null | false | VXX/sd_images | 2022-11-08T08:29:46.000Z | null | false | 1151784096a8e009fd8cf9b614759f05adc5071a | [] | [
"license:openrail"
] | https://huggingface.co/datasets/VXX/sd_images/resolve/main/README.md | ---
license: openrail
---
|
echogecko | null | null | null | false | null | false | echogecko/molly | 2022-11-04T14:36:37.000Z | null | false | 6cd12e75db5b54753dc7a1ef66f4fef854307edb | [] | [] | https://huggingface.co/datasets/echogecko/molly/resolve/main/README.md | |
marianna13 | null | null | null | false | null | false | marianna13/laion2B-multi-joined-translated-to-en-ultra-hr | 2022-11-07T14:26:15.000Z | null | false | 8e413a6829a1f3d83de7c898850c5b92690c9b3f | [] | [] | https://huggingface.co/datasets/marianna13/laion2B-multi-joined-translated-to-en-ultra-hr/resolve/main/README.md | ---
license: cc-by-4.0
---
|
assq | null | null | null | false | null | false | assq/11 | 2022-11-04T15:13:01.000Z | null | false | ef320d1bb821f7a1cbc1e029f7e930faae59ff6c | [] | [
"license:cc0-1.0"
] | https://huggingface.co/datasets/assq/11/resolve/main/README.md | ---
license: cc0-1.0
---
|
duyngtr16061999 | null | null | null | false | 4 | false | duyngtr16061999/pokemon_fashion_mixed | 2022-11-04T16:21:57.000Z | null | false | 0ff5ded4caccbfeb631f5f70ea3e19a773e0004e | [] | [] | https://huggingface.co/datasets/duyngtr16061999/pokemon_fashion_mixed/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: "Fashion captions"
size_categories:
- n<100K
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
LiveEvil | null | null | null | false | null | false | LiveEvil/WannaCryBlock | 2022-11-04T15:47:08.000Z | null | false | ca5374f76ac0bd2208713ad7d9b37bc7f99aed1e | [] | [
"license:mit"
] | https://huggingface.co/datasets/LiveEvil/WannaCryBlock/resolve/main/README.md | ---
license: mit
---
|
LiveEvil | null | null | null | false | 4 | false | LiveEvil/autotrain-data-wannacryblock | 2022-11-04T15:50:47.000Z | null | false | 8eba08313dc9214ae16b72c9bba3f4397873dce3 | [] | [
"language:en"
] | https://huggingface.co/datasets/LiveEvil/autotrain-data-wannacryblock/resolve/main/README.md | ---
language:
- en
---
# AutoTrain Dataset for project: wannacryblock
## Dataset Description
This dataset has been automatically processed by AutoTrain for project wannacryblock.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "What is developing?",
"question": "Developing is the process of building an app.",
"answers.text": [
"15"
],
"answers.answer_start": [
105
],
"feat___index_level_0__": [
"Developing is the process of building an app or web application through multiple files and lines of code."
]
},
{
"context": "What is an API?",
"question": "It is used to control the functions of one app through another app.",
"answers.text": [
"51"
],
"answers.answer_start": [
117
],
"feat___index_level_0__": [
"API stands for Application Programming Interface. It is used to control the functions of one app through another app."
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)",
"feat___index_level_0__": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 14 |
| valid | 4 |
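AutoTrain stores nested features under flattened, dotted column names — the `answers` struct above appears as `answers.text` and `answers.answer_start`. A small sketch of re-nesting such keys into the usual SQuAD-style structure (the helper and sample record are illustrative, not part of AutoTrain itself):

```python
# AutoTrain flattens nested features into dotted keys, e.g. the `answers`
# struct above becomes "answers.text" / "answers.answer_start".
# This helper re-nests such keys; the sample record is illustrative.
def unflatten(rec):
    """Turn {'answers.text': [...]} into {'answers': {'text': [...]}}."""
    out = {}
    for key, value in rec.items():
        node = out
        parts = key.split(".")
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return out

flat = {
    "context": "What is an API?",
    "question": "It is used to control the functions of one app through another app.",
    "answers.text": ["51"],
    "answers.answer_start": [117],
}
nested = unflatten(flat)
assert nested["answers"] == {"text": ["51"], "answer_start": [117]}
```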
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-autoevaluate__zero-shot-classification-sample-autoevalu-103f11-1986766201 | 2022-11-04T15:49:57.000Z | null | false | 0308f18780cb95bcb0625b1d0fa798c15d3aa250 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:autoevaluate/zero-shot-classification-sample"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-autoevaluate__zero-shot-classification-sample-autoevalu-103f11-1986766201/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/zero-shot-classification-sample
eval_info:
task: text_zero_shot_classification
model: autoevaluate/zero-shot-classification
metrics: ['recall', 'precision']
dataset_name: autoevaluate/zero-shot-classification-sample
dataset_config: autoevaluate--zero-shot-classification-sample
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MauritsG](https://huggingface.co/MauritsG) for evaluating this model. |
LiveEvil | null | null | null | false | 1 | false | LiveEvil/MyClass | 2022-11-04T16:02:41.000Z | null | false | 33997c7bd85c09f16d8f1ccfe8daae52d4378a4a | [] | [
"license:mit"
] | https://huggingface.co/datasets/LiveEvil/MyClass/resolve/main/README.md | ---
license: mit
---
|
marianna13 | null | null | null | false | null | false | marianna13/laion1B-nolang-joined-translated-to-en-hr | 2022-11-07T13:37:23.000Z | null | false | a0b5c74c5522a35f19c88c46b8310c32a8f17761 | [] | [] | https://huggingface.co/datasets/marianna13/laion1B-nolang-joined-translated-to-en-hr/resolve/main/README.md | ---
license: cc-by-4.0
---
|
marianna13 | null | null | null | false | null | false | marianna13/laion1B-nolang-joined-translated-to-en-ultra-hr | 2022-11-04T16:40:36.000Z | null | false | f64c63247c266a97e92092e7906050cf9f6f6b02 | [] | [] | https://huggingface.co/datasets/marianna13/laion1B-nolang-joined-translated-to-en-ultra-hr/resolve/main/README.md | ---
license: cc-by-4.0
---
|
FAERS-PubMed | null | null | null | false | null | false | FAERS-PubMed/PubMed-filenames-2022-11-04 | 2022-11-04T17:13:01.000Z | null | false | 4be09d8735065a6c27c0d3fb70c9e0cde538d8e2 | [] | [] | https://huggingface.co/datasets/FAERS-PubMed/PubMed-filenames-2022-11-04/resolve/main/README.md | ---
dataset_info:
features:
- name: filenames
dtype: string
splits:
- name: train
num_bytes: 72410
num_examples: 1114
download_size: 0
dataset_size: 72410
---
# Dataset Card for "PubMed-filenames-2022-11-04"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
marianna13 | null | null | null | false | null | false | marianna13/improved_aesthetics_4.5plus-ultra-hr | 2022-11-07T14:50:02.000Z | null | false | adb84a6881e297ec2c9df51d56902781a25cf6e5 | [] | [] | https://huggingface.co/datasets/marianna13/improved_aesthetics_4.5plus-ultra-hr/resolve/main/README.md | ---
license: apache-2.0
---
|
roydcarlson | null | null | null | false | 15 | false | roydcarlson/dirt_teff2 | 2022-11-04T17:28:50.000Z | null | false | c4c55382a58a997f57ff1100eff6696d1574204d | [] | [] | https://huggingface.co/datasets/roydcarlson/dirt_teff2/resolve/main/README.md | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 6436424.0
num_examples: 7
download_size: 6352411
dataset_size: 6436424.0
---
# Dataset Card for "dirt_teff2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LiveEvil | null | null | null | false | 1 | false | LiveEvil/LetMeE | 2022-11-04T18:07:01.000Z | null | false | de4b6a7d716fead381ca0525bf7488c237ca09c4 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/LiveEvil/LetMeE/resolve/main/README.md | ---
license: openrail
---
|
FAERS-PubMed | null | null | null | false | null | false | FAERS-PubMed/FAERS-filenames-2022-11-04 | 2022-11-04T18:31:35.000Z | null | false | d27376f5dbb3e4196348e359d6c1da3dc4049758 | [] | [] | https://huggingface.co/datasets/FAERS-PubMed/FAERS-filenames-2022-11-04/resolve/main/README.md | ---
dataset_info:
features:
- name: filenames
dtype: string
splits:
- name: train
num_bytes: 1590
num_examples: 60
download_size: 0
dataset_size: 1590
---
# Dataset Card for "FAERS-filenames-2022-11-04"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
roydcarlson | null | null | null | false | 1 | false | roydcarlson/sidewalk-imagery2 | 2022-11-04T18:41:17.000Z | null | false | f2675b210a774ec7e8116c38acb39e724f101ea4 | [] | [] | https://huggingface.co/datasets/roydcarlson/sidewalk-imagery2/resolve/main/README.md | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 3138394.0
num_examples: 10
download_size: 3139599
dataset_size: 3138394.0
---
# Dataset Card for "sidewalk-imagery2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
codysoccerman | null | null | null | false | 11 | false | codysoccerman/my_test_dataset | 2022-11-04T20:05:58.000Z | null | false | 10428c8e92b6b9bdbc4fb1c006d0e9e322fb4cb3 | [] | [] | https://huggingface.co/datasets/codysoccerman/my_test_dataset/resolve/main/README.md | hjvjjv |
BestManOnEarth | null | null | null | false | null | false | BestManOnEarth/dataset01 | 2022-11-04T20:23:20.000Z | null | false | 380434e3076631fced1ab7db82568a079c295764 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/BestManOnEarth/dataset01/resolve/main/README.md | ---
license: afl-3.0
---
|
InstantD | null | null | null | false | null | false | InstantD/PathfinderKobold | 2022-11-04T23:17:55.000Z | null | false | 3817af36979322cdbbbd8896baafbf248198878c | [] | [] | https://huggingface.co/datasets/InstantD/PathfinderKobold/resolve/main/README.md | |
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/land_style | 2022-11-12T14:42:39.000Z | null | false | 31d3a08d5af6c0eb87e822ae146b14955d8453e0 | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/land_style/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Landscape Style Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
Two different Versions:
### Version 1:
File: ```land_style```
To use it in a prompt: ```"art by land_style"```
For best results, write something like ```highly detailed background art by land_style```
### Version 2:
File: ```landscape_style```
To use it in a prompt: ```"art by landscape_style"```
For best results, write something like ```highly detailed background art by landscape_style```
If it is too strong, just add [] around it.
Trained until 7000 steps
Have fun :)
## Example Pictures
<img src=https://i.imgur.com/UjoXFkJ.png width=100% height=100%/>
<img src=https://i.imgur.com/rAoEyLK.png width=100% height=100%/>
<img src=https://i.imgur.com/SpPsc7i.png width=100% height=100%/>
<img src=https://i.imgur.com/zMH0EeI.png width=100% height=100%/>
<img src=https://i.imgur.com/iQe0Jxc.png width=100% height=100%/>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
MarkGG | null | null | null | false | 16 | false | MarkGG/Romance-baseline | 2022-11-05T01:05:46.000Z | null | false | 55f1c09dcca698cd7015ff37b35ee2e136df6797 | [] | [] | https://huggingface.co/datasets/MarkGG/Romance-baseline/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 39176840.7
num_examples: 1105002
- name: validation
num_bytes: 4352982.3
num_examples: 122778
download_size: 23278822
dataset_size: 43529823.0
---
# Dataset Card for "Romance-baseline"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
iejMac | null | null | null | false | null | false | iejMac/CLIP-MSVD | 2022-11-05T02:19:16.000Z | null | false | 872974844b7d454a4e1fb0730de79149e7f7d826 | [] | [
"license:mit"
] | https://huggingface.co/datasets/iejMac/CLIP-MSVD/resolve/main/README.md | ---
license: mit
---
|
lmqg | null | @inproceedings{miller2020effect,
title={The effect of natural distribution shift on question answering models},
author={Miller, John and Krauth, Karl and Recht, Benjamin and Schmidt, Ludwig},
booktitle={International Conference on Machine Learning},
pages={6905--6916},
year={2020},
organization={PMLR}
} | [SQuAD Shifts](https://modestyachts.github.io/squadshifts-website/index.html) dataset for question answering task with custom split. | false | 65 | false | lmqg/qa_squadshifts | 2022-11-05T05:10:26.000Z | null | false | 7b8b77e8fdeb334e3550d1fb6167d4cc92dc6957 | [] | [
"arxiv:2004.14444",
"license:cc-by-4.0",
"language:en",
"multilinguality:monolingual",
"size_categories:1k<n<10k",
"source_datasets:extended|wikipedia",
"task_categories:question-answering",
"task_ids:extractive-qa"
] | https://huggingface.co/datasets/lmqg/qa_squadshifts/resolve/main/README.md | ---
license: cc-by-4.0
pretty_name: SQuADShifts
language: en
multilinguality: monolingual
size_categories: 1k<n<10k
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for "lmqg/qa_squadshifts"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2004.14444](https://arxiv.org/abs/2004.14444)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is SQuADShifts dataset with custom split of training/validation/test following [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts).
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
### Data Splits
| name |train | valid | test |
|-------------|------:|------:|-----:|
|default (all)|9209|6283|18844|
| amazon |3295|1648|4942|
| new_wiki |2646|1323|3969|
| nyt |3355|1678|5032|
| reddit |3268|1634|4901|
## Citation Information
```
@inproceedings{miller2020effect,
title={The effect of natural distribution shift on question answering models},
author={Miller, John and Krauth, Karl and Recht, Benjamin and Schmidt, Ludwig},
booktitle={International Conference on Machine Learning},
pages={6905--6916},
year={2020},
organization={PMLR}
}
``` |
henryscheible | null | null | null | false | 11 | false | henryscheible/winobias | 2022-11-05T05:11:25.000Z | null | false | 6f41e1fff033457ae09c882a845a548a1c99ddba | [] | [] | https://huggingface.co/datasets/henryscheible/winobias/resolve/main/README.md | ---
dataset_info:
features:
- name: label
dtype: int64
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: eval
num_bytes: 230400
num_examples: 1584
- name: train
num_bytes: 226080
num_examples: 1584
download_size: 83948
dataset_size: 456480
---
# Dataset Card for "winobias"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
svjack | null | null | null | false | 6 | false | svjack/diffusiondb_random_10k | 2022-11-05T06:42:29.000Z | null | false | 3441c9e1f9d053e02e451d65b5e9cbd91759b6c6 | [] | [] | https://huggingface.co/datasets/svjack/diffusiondb_random_10k/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
- name: seed
dtype: int64
- name: step
dtype: int64
- name: cfg
dtype: float32
- name: sampler
dtype: string
splits:
- name: train
num_bytes: 6221323762.0
num_examples: 10000
download_size: 5912620994
dataset_size: 6221323762.0
---
# Dataset Card for "diffusiondb_random_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966288 | 2022-11-05T09:08:51.000Z | null | false | f5e692026a34569c12e41c76f8d454fd9656f041 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad_v2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966288/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: deepset/roberta-base-squad2
metrics: ['accuracy', 'bleu', 'precision', 'recall', 'rouge']
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966289 | 2022-11-05T09:10:19.000Z | null | false | 0d4919bac6e97e65c5770de6df0c068c6668c1a8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad_v2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966289/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: abhilash1910/albert-squad-v2
metrics: ['accuracy', 'bleu', 'precision', 'recall', 'rouge']
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: abhilash1910/albert-squad-v2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966290 | 2022-11-05T09:09:12.000Z | null | false | 7d1d7bfc1ce0bc6e4232a162fa62f4bd9fac84aa | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad_v2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966290/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: deepset/bert-base-cased-squad2
metrics: ['accuracy', 'bleu', 'precision', 'recall', 'rouge']
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-base-cased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966291 | 2022-11-05T09:09:17.000Z | null | false | d3977836565f67db67cf3c73acff318889fe1fb8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad_v2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966291/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: deepset/bert-base-uncased-squad2
metrics: ['accuracy', 'bleu', 'precision', 'recall', 'rouge']
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-base-uncased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966292 | 2022-11-05T09:08:45.000Z | null | false | 7ea37d0dd1563d17ca76bbbd94870d0c2ecae6d0 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad_v2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966292/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: distilbert-base-cased-distilled-squad
metrics: ['accuracy', 'bleu', 'precision', 'recall', 'rouge']
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: distilbert-base-cased-distilled-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966293 | 2022-11-05T09:09:32.000Z | null | false | 5910f37a9ea67db63f742fab701c7f58fa9f2878 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad_v2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966293/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: deepset/electra-base-squad2
metrics: ['accuracy', 'bleu', 'precision', 'recall', 'rouge']
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/electra-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model. |
BasToTheMax | null | null | null | false | null | false | BasToTheMax/dttm | 2022-11-05T11:06:36.000Z | null | false | 56cfd9e05f4c648e040623c7a3dfd994b41a6370 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/BasToTheMax/dttm/resolve/main/README.md | ---
license: creativeml-openrail-m
---
Hello!
DiffusionToTheMax (dttm) is a free-to-use dataset of millions of images generated with Stable Diffusion.
|
JoBeer | null | null | null | false | 22 | false | JoBeer/eclassCorpus | 2022-11-05T11:27:35.000Z | null | false | 9ba4d51fdf5843eba79fcfa63a2fd74c19272e26 | [] | [] | https://huggingface.co/datasets/JoBeer/eclassCorpus/resolve/main/README.md | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: did
dtype: int64
- name: query
dtype: string
- name: name
dtype: string
- name: datatype
dtype: string
- name: unit
dtype: string
- name: IRDI
dtype: string
- name: metalabel
dtype: int64
splits:
- name: train
num_bytes: 142519
num_examples: 672
download_size: 0
dataset_size: 142519
---
# Dataset Card for "eclassCorpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JoBeer | null | null | null | false | 22 | false | JoBeer/eclassQuery | 2022-11-05T11:27:48.000Z | null | false | 0f162129030855fdcd30cb80a79ec4310f839ffa | [] | [] | https://huggingface.co/datasets/JoBeer/eclassQuery/resolve/main/README.md | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: did
dtype: int64
- name: query
dtype: string
- name: name
dtype: string
- name: duplicate_id
dtype: int64
- name: metalabel
dtype: int64
splits:
- name: eval
num_bytes: 106836
num_examples: 672
- name: train
num_bytes: 158066
num_examples: 1059
download_size: 121201
dataset_size: 264902
---
# Dataset Card for "eclassQuery"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
maderix | null | null | null | false | 18 | false | maderix/farsidecomics-blip-captions | 2022-11-05T11:29:49.000Z | null | false | b0f8f64e6d681f84caa925de86b77e2a61f47903 | [] | [] | https://huggingface.co/datasets/maderix/farsidecomics-blip-captions/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 37767218.0
num_examples: 354
download_size: 37175120
dataset_size: 37767218.0
---
# Dataset Card for "farsidecomics-blip-captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
svjack | null | null | null | false | 4 | false | svjack/diffusiondb_random_10k_zh_v1 | 2022-11-08T04:08:23.000Z | null | false | 0e804efcc3d6ef4934e925e9ffc7d73f8d33f194 | [] | [
"annotations_creators:machine-generated",
"language:en",
"language:zh",
"language_creators:other",
"multilinguality:multilingual",
"size_categories:10K"
] | https://huggingface.co/datasets/svjack/diffusiondb_random_10k_zh_v1/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language:
- en
- zh
language_creators:
- other
multilinguality:
- multilingual
pretty_name: 'Pokémon BLIP captions'
size_categories:
- 10K
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
- name: seed
dtype: int64
- name: step
dtype: int64
- name: cfg
dtype: float32
- name: sampler
dtype: string
- name: zh_prompt
dtype: string
splits:
- name: train
num_bytes: 5826763233.4353
num_examples: 9841
download_size: 5829710525
dataset_size: 5826763233.4353
---
# Dataset Card for "diffusiondb_random_10k_zh_v1"
svjack/diffusiondb_random_10k_zh_v1 is a dataset of 10k English samples randomly drawn from [diffusiondb](https://github.com/poloclub/diffusiondb) and translated into Chinese via [NMT](https://en.wikipedia.org/wiki/Neural_machine_translation), with some manual corrections.<br/>
It was used to train the Stable Diffusion models <br/> [svjack/Stable-Diffusion-FineTuned-zh-v0](https://huggingface.co/svjack/Stable-Diffusion-FineTuned-zh-v0)<br/>
[svjack/Stable-Diffusion-FineTuned-zh-v1](https://huggingface.co/svjack/Stable-Diffusion-FineTuned-zh-v1)<br/>
[svjack/Stable-Diffusion-FineTuned-zh-v2](https://huggingface.co/svjack/Stable-Diffusion-FineTuned-zh-v2)<br/>
It also serves as the data support for [https://github.com/svjack/Stable-Diffusion-Chinese-Extend](https://github.com/svjack/Stable-Diffusion-Chinese-Extend), a fine-tuned version of the Stable Diffusion model trained on this self-translated 10k diffusiondb Chinese corpus that "extends" it.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KheemDH | null | null | null | false | 7 | false | KheemDH/data | 2022-11-05T14:28:14.000Z | null | false | 8394ef7a7ccc5b2028f473be68097fc853febed0 | [] | [
"annotations_creators:other",
"language:en",
"language_creators:other",
"license:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:sentiment-analysis"
] | https://huggingface.co/datasets/KheemDH/data/resolve/main/README.md | ---
annotations_creators:
- other
language:
- en
language_creators:
- other
license:
- other
multilinguality:
- monolingual
pretty_name: data
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids:
- sentiment-analysis
---
|
galman33 | null | null | null | false | 24 | false | galman33/gal_yair_8300_1664x832 | 2022-11-05T14:54:09.000Z | null | false | ced75dce72ba1810bd050272470b07b1db519ebc | [] | [] | https://huggingface.co/datasets/galman33/gal_yair_8300_1664x832/resolve/main/README.md | ---
dataset_info:
features:
- name: lat
dtype: float64
- name: lon
dtype: float64
- name: country_code
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 1502268207.4
num_examples: 8300
download_size: 1410808567
dataset_size: 1502268207.4
---
# Dataset Card for "gal_yair_new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MJTidmarsh | null | null | null | false | null | false | MJTidmarsh/Goon3_Test | 2022-11-05T14:16:55.000Z | null | false | 7c31b58d1155a7597a693edaccb7fef7605a9b60 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/MJTidmarsh/Goon3_Test/resolve/main/README.md | ---
license: afl-3.0
---
|
NeelNanda | null | null | null | false | 2 | false | NeelNanda/counterfact-tracing | 2022-11-05T15:19:43.000Z | null | false | c945b082ca08d0a8f3ba227fb78404a09614c36e | [] | [
"arxiv:2211.00593"
] | https://huggingface.co/datasets/NeelNanda/counterfact-tracing/resolve/main/README.md | ---
dataset_info:
features:
- name: relation
dtype: string
- name: relation_prefix
dtype: string
- name: relation_suffix
dtype: string
- name: prompt
dtype: string
- name: relation_id
dtype: string
- name: target_false_id
dtype: string
- name: target_true_id
dtype: string
- name: target_true
dtype: string
- name: target_false
dtype: string
- name: subject
dtype: string
splits:
- name: train
num_bytes: 3400668
num_examples: 21919
download_size: 1109314
dataset_size: 3400668
---
# Dataset Card for "counterfact-tracing"
This is adapted from the counterfact dataset from the excellent [ROME paper](https://rome.baulab.info/) from David Bau and Kevin Meng.
This is a dataset of 21919 factual relations, formatted as `data["prompt"]==f"{data['relation_prefix']}{data['subject']}{data['relation_suffix']}"`. Each has two responses, `data["target_true"]` and `data["target_false"]`, which are intended to go immediately after the prompt.
The dataset was originally designed for memory editing in models. I made this for a research project doing mechanistic interpretability of how models recall factual knowledge, building on their causal tracing technique, and so stripped their data down to the information relevant to causal tracing. I also prepended spaces where relevant so that the subject and targets can be properly tokenized as is (spaces are always prepended to targets, and are prepended to subjects unless the subject is at the start of a sentence).
Each fact has both a true and false target. I recommend measuring the logit *difference* between the true and false target (at least, if it's a single token target!), so as to control for eg the parts of the model which identify that it's supposed to be giving a fact of this type at all. (Idea inspired by the excellent [Interpretability In the Wild](https://arxiv.org/abs/2211.00593) paper). |
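The logit-difference measurement described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: `logit_diff` simply takes the model's final-position logit vector and the two single-token target ids; the toy 5-token vocabulary below is made up.

```python
import numpy as np

def logit_diff(final_logits: np.ndarray, true_id: int, false_id: int) -> float:
    """Logit difference between the true and false target tokens.

    `final_logits` is the logit vector at the last prompt position;
    a positive value means the model prefers the true target.
    """
    return float(final_logits[true_id] - final_logits[false_id])

# Toy example with a hypothetical 5-token vocabulary.
logits = np.array([1.0, 4.0, 2.5, 0.0, -1.0])
print(logit_diff(logits, true_id=1, false_id=2))  # 1.5
```

In practice the logit vector would come from running a causal language model on `data["prompt"]` and taking the logits at the final token position.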
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-indonli-indonli-717ea6-1995866375 | 2022-11-05T18:26:33.000Z | null | false | 9ce26cfd13b8a40a09229eb582d654bf774c11cb | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:indonli"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-indonli-indonli-717ea6-1995866375/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- indonli
eval_info:
task: natural_language_inference
model: w11wo/indonesian-roberta-base-indonli
metrics: []
dataset_name: indonli
dataset_config: indonli
dataset_split: test_expert
col_mapping:
text1: premise
text2: hypothesis
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: w11wo/indonesian-roberta-base-indonli
* Dataset: indonli
* Config: indonli
* Split: test_expert
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@afaji](https://huggingface.co/afaji) for evaluating this model. |
LiveEvil | null | null | null | false | null | false | LiveEvil/ImRealSrry | 2022-11-05T18:29:40.000Z | null | false | 423016fea124ff2ab30f5d8d3a6f19bb3d27e0a6 | [] | [
"license:bigscience-openrail-m"
] | https://huggingface.co/datasets/LiveEvil/ImRealSrry/resolve/main/README.md | ---
license: bigscience-openrail-m
---
|
LiveEvil | null | null | null | false | 6 | false | LiveEvil/autotrain-data-imrealsrry | 2022-11-05T23:33:53.000Z | null | false | d6e6322f504d3df161199c9d7e9a52b0a2a150c5 | [] | [
"language:en"
] | https://huggingface.co/datasets/LiveEvil/autotrain-data-imrealsrry/resolve/main/README.md | ---
language:
- en
---
# AutoTrain Dataset for project: imrealsrry
## Dataset Description
This dataset has been automatically processed by AutoTrain for project imrealsrry.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "Developing is the process of building an app or web application through multiple files and lines of code.",
"question": "What is developing?",
"answers.text": [
"Developing is the process of building an app.",
"Developing is the process of building an app."
],
"answers.answer_start": [
15,
15
]
},
{
"context": "Python is a very versatile coding language, you can use it for almost anything.",
"question": "How can I use Python?",
"answers.text": [
"Python is a very versatile coding language, you can use it for almost anything.",
"Python is a very versatile coding language, you can use it for almost anything."
],
"answers.answer_start": [
0,
0
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 7 |
| valid | 2 |
|
qanastek | null | @article{10.1093/bioinformatics/btx238,
author = {Soğancıoğlu, Gizem and Öztürk, Hakime and Özgür, Arzucan},
title = "{BIOSSES: a semantic sentence similarity estimation system for the biomedical domain}",
journal = {Bioinformatics},
volume = {33},
number = {14},
pages = {i49-i58},
year = {2017},
month = {07},
abstract = "{The amount of information available in textual format is rapidly increasing in the biomedical domain. Therefore, natural language processing (NLP) applications are becoming increasingly important to facilitate the retrieval and analysis of these data. Computing the semantic similarity between sentences is an important component in many NLP tasks including text retrieval and summarization. A number of approaches have been proposed for semantic sentence similarity estimation for generic English. However, our experiments showed that such approaches do not effectively cover biomedical knowledge and produce poor results for biomedical text.We propose several approaches for sentence-level semantic similarity computation in the biomedical domain, including string similarity measures and measures based on the distributed vector representations of sentences learned in an unsupervised manner from a large biomedical corpus. In addition, ontology-based approaches are presented that utilize general and domain-specific ontologies. Finally, a supervised regression based model is developed that effectively combines the different similarity computation metrics. A benchmark data set consisting of 100 sentence pairs from the biomedical literature is manually annotated by five human experts and used for evaluating the proposed methods.The experiments showed that the supervised semantic sentence similarity computation approach obtained the best performance (0.836 correlation with gold standard human annotations) and improved over the state-of-the-art domain-independent systems up to 42.6\\% in terms of the Pearson correlation metric.A web-based system for biomedical semantic sentence similarity computation, the source code, and the annotated benchmark data set are available at: http://tabilab.cmpe.boun.edu.tr/BIOSSES/.}",
issn = {1367-4803},
doi = {10.1093/bioinformatics/btx238},
url = {https://doi.org/10.1093/bioinformatics/btx238},
eprint = {https://academic.oup.com/bioinformatics/article-pdf/33/14/i49/25157316/btx238.pdf},
} | BIOSSES is a benchmark dataset for biomedical sentence similarity estimation.
The dataset comprises 100 sentence pairs, in which each sentence was selected
from the TAC (Text Analysis Conference) Biomedical Summarization Track Training
Dataset containing articles from the biomedical domain. The sentence pairs in
BIOSSES were selected from citing sentences, i.e. sentences that have a citation
to a reference article.
The sentence pairs were evaluated by five different human experts that judged
their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent).
In the original paper the mean of the scores assigned by the five human annotators
was taken as the gold standard. The Pearson correlation between the gold standard
scores and the scores estimated by the models was used as the evaluation metric.
The strength of correlation can be assessed by the general guideline proposed by
Evans (1996) as follows:
very strong: 0.80–1.00
strong: 0.60–0.79
moderate: 0.40–0.59
weak: 0.20–0.39
very weak: 0.00–0.19 | false | 11 | false | qanastek/Biosses-BLUE | 2022-11-05T23:23:58.000Z | biosses | false | 357bc4f6af754b70dfbb6ced6f48e9728baa8e0d | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language:en",
"license:gpl-3.0",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring"
] | https://huggingface.co/datasets/qanastek/Biosses-BLUE/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
paperswithcode_id: biosses
pretty_name: BIOSSES
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float32
splits:
- name: train
num_bytes: 32783
num_examples: 100
download_size: 36324
dataset_size: 32783
---
# Dataset Card for BIOSSES
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html
- **Repository:** https://github.com/gizemsogancioglu/biosses
- **Paper:** [BIOSSES: a semantic sentence similarity estimation system for the biomedical domain](https://academic.oup.com/bioinformatics/article/33/14/i49/3953954)
- **Point of Contact:** [Gizem Soğancıoğlu](gizemsogancioglu@gmail.com) and [Arzucan Özgür](gizemsogancioglu@gmail.com)
### Dataset Summary
BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/) containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.
The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:
- very strong: 0.80–1.00
- strong: 0.60–0.79
- moderate: 0.40–0.59
- weak: 0.20–0.39
- very weak: 0.00–0.19
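The Pearson evaluation metric above can be reproduced with a few lines of NumPy; the score arrays below are made-up illustrations, not actual BIOSSES annotations or model outputs:

```python
import numpy as np

# Hypothetical gold-standard scores (0-4 scale) and model-estimated scores.
gold = np.array([0.0, 1.2, 2.8, 3.6, 4.0])
pred = np.array([0.3, 1.0, 2.5, 3.9, 3.8])

# Pearson correlation between gold and predicted similarity scores.
r = np.corrcoef(gold, pred)[0, 1]
print(r > 0.80)  # "very strong" on the Evans (1996) scale
```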
### Data Splits (From BLUE Benchmark)
|name|Train|Dev|Test|
|:--:|:--:|:--:|:--:|
|biosses|64|16|20|
### Supported Tasks and Leaderboards
Biomedical Semantic Similarity Scoring.
### Languages
English.
## Dataset Structure
### Data Instances
For each instance, there are two sentences (i.e. sentence 1 and 2), and its corresponding similarity score (the mean of the scores assigned by the five human annotators).
```json
{
"id": "0",
"sentence1": "Centrosomes increase both in size and in microtubule-nucleating capacity just before mitotic entry.",
"sentence2": "Functional studies showed that, when introduced into cell lines, miR-146a was found to promote cell proliferation in cervical cancer cells, which suggests that miR-146a works as an oncogenic miRNA in these cancers.",
"score": 0.0
}
```
### Data Fields
- `sentence 1`: string
- `sentence 2`: string
- `score`: float ranging from 0 (no relation) to 4 (equivalent)
## Dataset Creation
### Curation Rationale
### Source Data
The [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/).
### Annotations
#### Annotation process
The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.
The table below shows the Pearson correlation of the scores of each annotator with respect to the average scores of the remaining four annotators. It is observed that there is strong association among the scores of the annotators. The lowest correlations are 0.902, which can be considered as an upper bound for an algorithmic measure evaluated on this dataset.
| |Correlation r |
|----------:|--------------:|
|Annotator A| 0.952|
|Annotator B| 0.958|
|Annotator C| 0.917|
|Annotator D| 0.902|
|Annotator E| 0.941|
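The agreement numbers above are plain Pearson correlations between one annotator's scores and the mean of the other four. A self-contained sketch of the computation (the toy score vectors below are invented for illustration, not the actual annotator data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy example: one annotator's scores vs. the mean of the other four.
annotator = [0.0, 1.0, 2.5, 3.0, 4.0]
mean_of_rest = [0.2, 1.1, 2.4, 3.2, 3.8]
print(round(pearson(annotator, mean_of_rest), 3))
```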
## Additional Information
### Dataset Curators
- Gizem Soğancıoğlu, gizemsogancioglu@gmail.com
- Hakime Öztürk, hakime.ozturk@boun.edu.tr
- Arzucan Özgür, gizemsogancioglu@gmail.com
Bogazici University, Istanbul, Turkey
### Licensing Information
BIOSSES is made available under the terms of [The GNU Common Public License v.3.0](https://www.gnu.org/licenses/gpl-3.0.en.html).
### Citation Information
```bibtex
@article{10.1093/bioinformatics/btx238,
author = {Soğancıoğlu, Gizem and Öztürk, Hakime and Özgür, Arzucan},
title = "{BIOSSES: a semantic sentence similarity estimation system for the biomedical domain}",
journal = {Bioinformatics},
volume = {33},
number = {14},
pages = {i49-i58},
year = {2017},
month = {07},
abstract = "{The amount of information available in textual format is rapidly increasing in the biomedical domain. Therefore, natural language processing (NLP) applications are becoming increasingly important to facilitate the retrieval and analysis of these data. Computing the semantic similarity between sentences is an important component in many NLP tasks including text retrieval and summarization. A number of approaches have been proposed for semantic sentence similarity estimation for generic English. However, our experiments showed that such approaches do not effectively cover biomedical knowledge and produce poor results for biomedical text.We propose several approaches for sentence-level semantic similarity computation in the biomedical domain, including string similarity measures and measures based on the distributed vector representations of sentences learned in an unsupervised manner from a large biomedical corpus. In addition, ontology-based approaches are presented that utilize general and domain-specific ontologies. Finally, a supervised regression based model is developed that effectively combines the different similarity computation metrics. A benchmark data set consisting of 100 sentence pairs from the biomedical literature is manually annotated by five human experts and used for evaluating the proposed methods.The experiments showed that the supervised semantic sentence similarity computation approach obtained the best performance (0.836 correlation with gold standard human annotations) and improved over the state-of-the-art domain-independent systems up to 42.6\\% in terms of the Pearson correlation metric.A web-based system for biomedical semantic sentence similarity computation, the source code, and the annotated benchmark data set are available at: http://tabilab.cmpe.boun.edu.tr/BIOSSES/.}",
issn = {1367-4803},
doi = {10.1093/bioinformatics/btx238},
url = {https://doi.org/10.1093/bioinformatics/btx238},
eprint = {https://academic.oup.com/bioinformatics/article-pdf/33/14/i49/25157316/btx238.pdf},
}
```
### Contributions
Thanks to [@qanastek](https://github.com/qanastek) for adding this dataset.
|
ChaiML | null | null | null | false | 301 | false | ChaiML/food_reviews | 2022-11-05T19:28:13.000Z | null | false | 559d6ac98dadc9d33f03e7293319ec6c4247e835 | [] | [] | https://huggingface.co/datasets/ChaiML/food_reviews/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 82388026
num_examples: 568455
download_size: 50550760
dataset_size: 82388026
---
# Dataset Card for "food_reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
flamesbob | null | null | null | false | null | false | flamesbob/Duality_style | 2022-11-05T20:36:53.000Z | null | false | e9bad8693d5b42ddab7e1c15f2b5524680c5efb2 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/flamesbob/Duality_style/resolve/main/README.md | ---
license: creativeml-openrail-m
---
`duality_style, art by duality_style` — this will give a monochrome look with wings/feathers, flowers, and an opposite-reflection motif.
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
ansondotdesign | null | null | null | false | null | false | ansondotdesign/roku | 2022-11-05T21:00:09.000Z | null | false | 629e8a3f87e9ef4a9ec7d157e2946951c17983b2 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/ansondotdesign/roku/resolve/main/README.md | ---
license: afl-3.0
---
|
nielsr | null | null | null | false | 53 | false | nielsr/ade20k-panoptic-demo | 2022-11-06T17:13:22.000Z | null | false | 545e82b4d2819a24aae1ff54048ecf98b7b28231 | [] | [] | https://huggingface.co/datasets/nielsr/ade20k-panoptic-demo/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
- name: segments_info
list:
- name: area
dtype: int64
- name: bbox
sequence: int64
- name: category_id
dtype: int64
- name: id
dtype: int64
- name: iscrowd
dtype: int64
splits:
- name: train
num_bytes: 492746.0
num_examples: 10
- name: validation
num_bytes: 461402.0
num_examples: 10
download_size: 949392
dataset_size: 954148.0
---
# Dataset Card for "ade20k-panoptic-demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/lands_between | 2022-11-12T15:02:39.000Z | null | false | 8bdd59805ec01cc3920d42a7633083e4dea28265 | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/lands_between/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Lands Between Elden Ring Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
Two different Versions:
### Version 1:
File: ```lands_between```
To use it in a prompt: ```"art by lands_between"```
For best use write something like ```highly detailed background art by lands_between```
### Version 2:
File: ```elden_ring```
To use it in a prompt: ```"art by elden_ring"```
For best use write something like ```highly detailed background art by elden_ring```
If it is too strong, just add [] around it.
Trained for 7000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/Pajrsvy.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/Bly3NJi.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/IxLNgB6.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/6rJ5ppD.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/ueTEHtb.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/dlVIwXs.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
galman33 | null | null | null | false | 1 | false | galman33/gal_yair_83000_1664x832 | 2022-11-07T16:16:17.000Z | null | false | b1743a3eb280777e999ff98f0c9f00361b4042b2 | [] | [] | https://huggingface.co/datasets/galman33/gal_yair_83000_1664x832/resolve/main/README.md | ---
dataset_info:
features:
- name: lat
dtype: float64
- name: lon
dtype: float64
- name: country_code
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 12963511218.0
num_examples: 83000
download_size: 14150729267
dataset_size: 12963511218.0
---
# Dataset Card for "gal_yair_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LiveEvil | null | null | null | false | 8 | false | LiveEvil/Im | 2022-11-10T17:20:25.000Z | null | false | 7603c0da12be1c4f630020fe27db2d972a5793f1 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/LiveEvil/Im/resolve/main/README.md | ---
license: openrail
---
|
ebeaulac | null | null | null | false | 2 | false | ebeaulac/adj-n0ed8tdx-800-150-3 | 2022-11-05T23:38:13.000Z | null | false | c0179e1d7304760d33b8fe4985288ea6d025eea2 | [] | [] | https://huggingface.co/datasets/ebeaulac/adj-n0ed8tdx-800-150-3/resolve/main/README.md | ---
dataset_info:
features:
- name: matrix
sequence:
sequence: float64
- name: is_adjacent
dtype: bool
splits:
- name: train
num_bytes: 55909792
num_examples: 1600
- name: valid
num_bytes: 10444854
num_examples: 300
download_size: 48159452
dataset_size: 66354646
---
# Dataset Card for "adj-n0ed8tdx-800-150-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ebeaulac | null | null | null | false | 1 | false | ebeaulac/adj-n0ed8tdx-800-150-10 | 2022-11-06T00:09:01.000Z | null | false | be5ccd50c1a5b6a629bfeead07d335977b77096a | [] | [] | https://huggingface.co/datasets/ebeaulac/adj-n0ed8tdx-800-150-10/resolve/main/README.md | ---
dataset_info:
features:
- name: matrix
sequence:
sequence: float64
- name: is_adjacent
dtype: bool
splits:
- name: train
num_bytes: 5311464
num_examples: 1600
- name: valid
num_bytes: 993502
num_examples: 300
download_size: 4985370
dataset_size: 6304966
---
# Dataset Card for "adj-n0ed8tdx-800-150-10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
IkariDev | null | null | null | false | null | false | IkariDev/megumin_embedding | 2022-11-06T02:39:54.000Z | null | false | b22034546d43f5c4eb182381cdb97dbab8e29406 | [] | [
"language:en",
"Tags:stable-diffusion",
"Tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/IkariDev/megumin_embedding/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Megumin Embedding / Textual Inversion
# The embedding uses NSFW pictures as a dataset!
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"megumin_embedding"```
For best use write something like ```(megumin_style:1.1), messy hair, 1girl, smooth, smooth shading, brown hair, short hair, red eyes, long sideburns, sidelocks, volumetric lighting, slim lips, cinematic lighting, black choker```
If it is too strong add [] around it.
Trained for 10000 steps
Have fun :)
<details>
<summary>Example pictures</summary>
used prompt: ```"(megumin_style:1.1), messy hair, 1girl, smooth, smooth shading, brown hair, short hair, red eyes, long sideburns, sidelocks, volumetric lighting, slim lips, cinematic lighting, backlighting, lightray, black choker, slim lips, portrait, looking at viewer, white background, closed mouth, (small breasts:1.2), solo, solo focus, full body, arms behind back, (close-up:1.1), witch hat, smile, clothed, clothing"```
<table>
<tr>
<td><img src=https://i.ibb.co/X8d4tWw/20221103066047-4267076194-megumin-style-1-1-messy-hair-1girl-smooth-smooth-shading-brown-hair-short.png width=100% height=100%/></td>
<td><img src=https://i.ibb.co/mhPQ4g2/20221103066051-4183082107-megumin-style-1-1-messy-hair-1girl-smooth-smooth-shading-brown-hair-short.png width=100% height=100%/></td>
<td><img src=https://i.ibb.co/XFx9Khw/20221103066053-4183082108-megumin-style-1-1-messy-hair-1girl-smooth-smooth-shading-brown-hair-short.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.ibb.co/FhpMv2L/20221103066055-4183082109-megumin-style-1-1-messy-hair-1girl-smooth-smooth-shading-brown-hair-short.png width=100% height=100%/></td>
<td><img src=https://i.ibb.co/YtRsHbN/20221103066057-4183082110-megumin-style-1-1-messy-hair-1girl-smooth-smooth-shading-brown-hair-short.png width=100% height=100%/></td>
<td><img src=https://i.ibb.co/j4P9tN5/20221103066059-4183082111-megumin-style-1-1-messy-hair-1girl-smooth-smooth-shading-brown-hair-short.png width=100% height=100%/></td>
</tr>
</table>
</details>
<details>
<summary>License</summary>
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
</details>
|