id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
amitness/korpus_malti_press | 2023-08-15T13:49:33.000Z | [
"language:mt",
"region:us"
] | amitness | null | null | null | 0 | 4 | ---
language: mt
dataset_info:
features:
- name: category
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
sequence: string
- name: subtitle
dtype: string
- name: source
dtype: string
- name: year
dtype: 'null'
- name: text_raw
sequence: string
splits:
- name: raw
num_bytes: 163668738
num_examples: 44824
download_size: 0
dataset_size: 163668738
configs:
- config_name: default
data_files:
- split: raw
path: data/raw-*
---
# Dataset Card for "korpus_malti_press"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
declare-lab/InstructEvalImpact | 2023-06-09T08:53:22.000Z | [
"size_categories:n<1K",
"license:apache-2.0",
"region:us"
] | declare-lab | null | null | null | 6 | 4 | ---
license: apache-2.0
size_categories:
- n<1K
ArXiv: 2306.04757
---
# Project Links
# Dataset Description
The IMPACT dataset contains 50 human-created prompts for each category, 200 in total, to test LLMs' general writing ability.
Instruction-tuned LLMs demonstrate promising ability in writing-based tasks, such as composing letters or ethical debates. This dataset consists of prompts across 4 diverse usage scenarios:
- **Informative Writing**: User queries such as self-help advice or explanations of various concepts
- **Professional Writing**: Formats such as suggestions, presentations, or emails in a business setting
- **Argumentative Writing**: Debate positions on ethical and societal questions
- **Creative Writing**: Diverse writing formats such as stories, poems, and songs.
The IMPACT dataset is included in our [InstructEval Benchmark Suite](https://github.com/declare-lab/instruct-eval).
# Evaluation Results
We leverage ChatGPT to judge the quality of the answers generated by LLMs, in terms of:
- Relevance: how well the answer engages with the given prompt
- Coherence: general text quality such as organization and logical flow
Each answer is scored on a Likert scale from 1 to 5. We evaluate the models in the zero-shot
setting based on the given prompt, and perform sampling-based decoding with a temperature of 1.0.
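The averages reported below come from aggregating the per-answer Likert scores. The sketch below illustrates that aggregation with made-up scores (the rows here are illustrative, not actual judge outputs):

```python
from statistics import mean

# Hypothetical judge outputs: (category, relevance, coherence) per answer.
judged = [
    ("informative", 4, 5),
    ("informative", 3, 4),
    ("creative", 5, 4),
    ("creative", 4, 3),
]

def average_scores(rows):
    """Average relevance/coherence per category on the 1-5 Likert scale."""
    by_cat = {}
    for cat, rel, coh in rows:
        by_cat.setdefault(cat, []).append((rel, coh))
    return {
        cat: (mean(r for r, _ in pairs), mean(c for _, c in pairs))
        for cat, pairs in by_cat.items()
    }

print(average_scores(judged))
# {'informative': (3.5, 4.5), 'creative': (4.5, 3.5)}
```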
| **Model** | **Size** | **Informative** | | **Professional** | | **Argumentative** | | **Creative** | | **Avg.** | |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| | | Rel. | Coh. | Rel. | Coh. | Rel. | Coh. | Rel. | Coh. | Rel. | Coh. |
| **ChatGPT** | - | 3.34 | 3.98 | 3.88 | 3.96 | 3.96 | 3.82 | 3.92 | 3.94 | 3.78 | 3.93 |
| [**Flan-Alpaca**](https://huggingface.co/declare-lab/flan-alpaca-xxl) | 11B | 3.56 | 3.46 | 3.54 | 3.70 | 3.22 | 3.28 | 3.70 | 3.40 | 3.51 | 3.46 |
| [**Dolly-V2**](https://huggingface.co/databricks/dolly-v2-12b) | 12B | 3.54 | 3.64 | 2.96 | 3.74 | 3.66 | 3.20 | 3.02 | 3.18 | 3.30 | 3.44 |
| [**StableVicuna**](https://huggingface.co/TheBloke/stable-vicuna-13B-HF) | 13B | 3.54 | 3.64 | 2.96 | 3.74 | 3.30 | 3.20 | 3.02 | 3.18 | 3.21 | 3.44 |
| [**Flan-T5**](https://huggingface.co/google/flan-t5-xxl) | 11B | 2.64 | 3.24 | 2.62 | 3.22 | 2.54 | 3.40 | 2.50 | 2.72 | 2.58 | 3.15 |
# Citation
Please consider citing the following article if you found our work useful:
```bibtex
@article{chia2023instructeval,
title={INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models},
author={Yew Ken Chia and Pengfei Hong and Lidong Bing and Soujanya Poria},
journal={arXiv preprint arXiv:2306.04757},
year={2023}
}
```
|
Binaryy/travel_sample | 2023-06-09T11:53:34.000Z | [
"region:us"
] | Binaryy | null | null | null | 1 | 4 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 41063
num_examples: 20
download_size: 29530
dataset_size: 41063
---
# Dataset Card for "travel_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shawarmas/built-in-dictionary.txt | 2023-07-14T09:52:06.000Z | [
"region:us"
] | shawarmas | null | null | null | 1 | 4 | Entry not found |
polejowska/cd45rb | 2023-06-10T08:06:52.000Z | [
"region:us"
] | polejowska | null | null | null | 1 | 4 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
list:
- name: category_id
dtype:
class_label:
names:
'0': leukocyte
- name: image_id
dtype: string
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: segmentation
list:
list: float32
- name: iscrowd
dtype: bool
splits:
- name: train
num_bytes: 35879463408.88
num_examples: 18421
- name: valid
num_bytes: 3475442128.938
num_examples: 1781
- name: test
num_bytes: 4074586864.944
num_examples: 2116
download_size: 43275144782
dataset_size: 43429492402.762
---
# Dataset Card for "cd45rb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
eastwind/semeval-2016-absa-reviews-english-translated-resampled | 2023-06-11T10:17:43.000Z | [
"license:mit",
"region:us"
] | eastwind | null | null | null | 0 | 4 | ---
license: mit
---
# Dataset Card for Hotel Review ABSA (SemEval 2016 Translated from Arabic)
## Dataset Description
Derived from eastwind/semeval-2016-absa-reviews-english-translated-stanford-alpaca by upsampling the neutral class and then resampling 3k examples from each class. |
vietgpt/OSCAR-2201 | 2023-06-13T05:00:30.000Z | [
"region:us"
] | vietgpt | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: url
dtype: string
- name: date
dtype: string
- name: perplexity
dtype: float64
splits:
- name: train
num_bytes: 15978372237.047762
num_examples: 1700386
download_size: 6412125570
dataset_size: 15978372237.047762
---
# Dataset Card for "OSCAR-2201"
Num tokens: 2,682,681,285 tokens |
vietgpt/OSCAR-2109 | 2023-06-13T04:53:37.000Z | [
"region:us"
] | vietgpt | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: url
dtype: string
- name: date
dtype: string
- name: perplexity
dtype: float64
splits:
- name: train
num_bytes: 16802536783.756039
num_examples: 5098334
download_size: 8245526034
dataset_size: 16802536783.756039
---
# Dataset Card for "OSCAR-2109"
Num tokens: 2,884,522,212 tokens |
jondurbin/airoboros-gpt4-1.2 | 2023-06-22T15:00:42.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | jondurbin | null | null | null | 18 | 4 | ---
license: cc-by-nc-4.0
---
A continuation of [gpt4-1.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.1), with:
* over 1000 new coding instructions, along with several hundred prompts using `PLAINFORMAT` to *hopefully* allow non-markdown/backtick/verbose code generation
* nearly 4000 additional math/reasoning examples, but this time using the ORCA style "[prompt]. Explain like I'm five." / "Justify your logic", etc.
* several hundred roleplaying examples
* additional misc/general data
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially. |
Ali-C137/Guanaco-oasst1_Originals_Arabic_pairs | 2023-06-13T17:48:47.000Z | [
"region:us"
] | Ali-C137 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: translated_text
dtype: string
splits:
- name: train
num_bytes: 38713258
num_examples: 10364
download_size: 20094755
dataset_size: 38713258
---
# Dataset Card for "Guanaco-oasst1_Originals_Arabic_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceM4/VisDial_modif-Sample | 2023-06-13T17:52:38.000Z | [
"region:us"
] | HuggingFaceM4 | null | null | null | 1 | 4 | ---
dataset_info:
features:
- name: caption
dtype: string
- name: dialog
sequence:
sequence: string
- name: image_path
dtype: string
- name: global_image_id
dtype: string
- name: anns_id
dtype: string
- name: image
dtype: image
- name: question
dtype: string
- name: answer
sequence: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 164280536.5563279
num_examples: 1000
- name: validation
num_bytes: 162457052.0348837
num_examples: 1000
- name: test
num_bytes: 162318287.0
num_examples: 1000
download_size: 458274072
dataset_size: 489055875.5912116
---
# Dataset Card for "VisDial_modif-Sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
agkphysics/AudioSet | 2023-07-13T12:25:32.000Z | [
"task_categories:audio-classification",
"license:cc-by-4.0",
"audio",
"region:us"
] | agkphysics | null | null | null | 1 | 4 | ---
license: cc-by-4.0
tags:
- audio
task_categories:
- audio-classification
---
# AudioSet data
This repository contains the balanced training set and evaluation set
of the [AudioSet data](
https://research.google.com/audioset/dataset/index.html). The YouTube
videos were downloaded in March 2023, so not all of the original
audio clips are available.
Extracting the `*.tar` files will place audio clips into the `audio/`
directory. The distribution of audio clips is as follows:
- `audio/bal_train`: 18685 audio clips out of 22160 originally.
- `audio/eval`: 17142 audio clips out of 20371 originally.
Most audio is sampled at 48 kHz, 24-bit, but about 10% is sampled at
44.1 kHz, 24-bit. Audio files are stored in the FLAC format.
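A minimal way to extract the archives with the standard library (the glob pattern and destination directory are assumptions, not part of any official instructions):

```python
import glob
import tarfile

def extract_all(pattern="*.tar", dest="."):
    """Extract every tar archive matching `pattern` into `dest`,
    placing audio clips under the audio/ directory."""
    extracted = []
    for path in sorted(glob.glob(pattern)):
        with tarfile.open(path) as tar:
            tar.extractall(dest)
            extracted.extend(tar.getnames())
    return extracted
```

After extraction, the clips end up under `dest/audio/bal_train` and `dest/audio/eval` as described above.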
## Citation
```bibtex
@inproceedings{45857,
title = {Audio Set: An ontology and human-labeled dataset for audio events},
author = {Jort F. Gemmeke and Daniel P. W. Ellis and Dylan Freedman and Aren Jansen and Wade Lawrence and R. Channing Moore and Manoj Plakal and Marvin Ritter},
year = {2017},
booktitle = {Proc. IEEE ICASSP 2017},
address = {New Orleans, LA}
}
```
|
yyu/arxiv-attrprompt | 2023-09-13T20:57:33.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"multilabel_classification",
"arxiv",
"scientific_papers",
"arxiv:2306.15895",
"region:us"
] | yyu | null | null | null | 1 | 4 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- multilabel_classification
- arxiv
- scientific_papers
size_categories:
- 10K<n<100K
version:
- V1
---
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt).
See the paper: https://arxiv.org/abs/2306.15895 for details.
- `label.txt`: the label name for each class
- `train.jsonl`: The original training set.
- `valid.jsonl`: The original validation set.
- `test.jsonl`: The original test set.
- `simprompt.jsonl`: The training data generated by the simple prompt.
- `attrprompt.jsonl`: The training data generated by the attributed prompt.
**Note**: Unlike the other datasets, the `labels` for the training/validation/test data are a *list* instead of an integer, as this is a multi-label classification dataset. |
KimuGenie/KLUE_mrc_negative_train | 2023-06-22T04:18:36.000Z | [
"task_categories:question-answering",
"language:ko",
"license:cc-by-4.0",
"arxiv:2105.09680",
"region:us"
] | KimuGenie | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: id
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: document_id
dtype: int64
- name: hard_negative_text
sequence: string
- name: hard_negative_document_id
sequence: int64
- name: hard_negative_title
sequence: string
splits:
- name: train
num_bytes: 205021808
num_examples: 3952
- name: validation
num_bytes: 12329366
num_examples: 240
download_size: 124133126
dataset_size: 217351174
license: cc-by-4.0
task_categories:
- question-answering
language:
- ko
---
# Dataset Card for "KLUE_mrc_negative_train"
This dataset augments the KLUE MRC train dataset with 20 hard negative texts per question, retrieved using BM25.
The hard negative texts were found with BM25, and duplicate data was removed as far as possible through preprocessing.
The retrieval accuracy of the BM25 model used is shown below.
|top-k|top-10|top-20|top-50|top-100|
|-|-|-|-|-|
|accuracy(%)|92.1|95.0|97.1|98.8|
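As a rough illustration of the mining procedure, here is a plain-Python BM25 scorer used to rank candidate passages for a question. It is a simplified stand-in for the actual retrieval setup used to build this dataset, and any corpus passed to it below is invented:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score whitespace-tokenized `docs` against `query` with plain BM25."""
    toks = [d.split() for d in docs]
    avgdl = sum(len(t) for t in toks) / len(toks)
    n = len(docs)
    q_terms = query.split()
    # Document frequency for each query term.
    df = {w: sum(1 for t in toks if w in t) for w in set(q_terms)}
    scores = []
    for t in toks:
        tf = Counter(t)
        s = 0.0
        for w in q_terms:
            idf = math.log(1 + (n - df[w] + 0.5) / (df[w] + 0.5))
            denom = tf[w] + k1 * (1 - b + b * len(t) / avgdl)
            s += idf * tf[w] * (k1 + 1) / denom
        scores.append(s)
    return scores

def hard_negatives(question, passages, gold_idx, k=20):
    """Indices of the top-k BM25-ranked passages, excluding the gold passage."""
    scores = bm25_scores(question, passages)
    ranked = sorted(range(len(passages)), key=lambda i: scores[i], reverse=True)
    return [i for i in ranked if i != gold_idx][:k]
```

With `k=20` this mirrors the 20 hard negatives per question described above.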
# Citation
```
@misc{park2021klue,
title={KLUE: Korean Language Understanding Evaluation},
author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
year={2021},
eprint={2105.09680},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
maximoss/rte3-french | 2023-09-08T08:57:36.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:fr",
"license:cc-by-4.0",
"region:us"
] | maximoss | null | null | null | 0 | 4 | ---
license: cc-by-4.0
task_categories:
- text-classification
language:
- fr
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The RTE3-FR dataset is the French translation of the Textual Entailment English dataset used in the [RTE-3 Challenge](https://nlp.stanford.edu/RTE3-pilot/).
Like its English counterpart, the French RTE-3 dataset is composed of a development set and a test set, each containing 800 T/H pairs.
All T/H pairs were manually translated into French and proofread.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `id`: Index number.
- `language`: The language of the concerned pair of sentences.
- `premise`: The translated premise in the target language.
- `hypothesis`: The translated hypothesis in the target language.
- `label`: The classification label, with possible values 0 (`entailment`), 1 (`neutral`), 2 (`contradiction`).
- `label_text`: The classification label, with possible values `entailment` (0), `neutral` (1), `contradiction` (2).
- `task`: The particular NLP task that the data was drawn from (IE, IR, QA and SUM).
- `length`: The length of the text of the pair.
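A small helper for converting between the two label representations, following the mapping given in the field descriptions above:

```python
# Mapping taken from the `label` / `label_text` field descriptions.
LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}
LABEL_IDS = {name: idx for idx, name in LABELS.items()}

def to_text(label: int) -> str:
    """Convert a numeric `label` to its `label_text` value."""
    return LABELS[label]

def to_id(label_text: str) -> int:
    """Convert a `label_text` value back to its numeric `label`."""
    return LABEL_IDS[label_text]
```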
### Data Splits
| name |entailment|neutral|contradiction|
|-------------|---------:|------:|------------:|
| dev | 412 | 299 | 89 |
| test | 410 | 318 | 72 |
| name |short|long|
|-------------|----:|---:|
| dev | 665 | 135|
| test | 683 | 117|
| name | IE| IR| QA|SUM|
|-------------|--:|--:|--:|--:|
| dev |200|200|200|200|
| test |200|200|200|200|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
TBA
### Acknowledgements
This work was supported by the Defence Innovation Agency (AID) of the Directorate General of Armament (DGA) of the French Ministry of Armed Forces, and by the ICO, _Institut Cybersécurité Occitanie_, funded by Région Occitanie, France.
### Contributions
[More Information Needed] |
HausaNLP/Naija-Lex | 2023-06-18T16:13:08.000Z | [
"multilinguality:monolingual",
"multilinguality:multilingual",
"language:hau",
"language:ibo",
"language:yor",
"license:cc-by-nc-sa-4.0",
"sentiment analysis, Twitter, tweets",
"stopwords",
"region:us"
] | HausaNLP | Naija-Stopwords is a part of the Naija-Senti project. It is a list of collected stopwords from the four most widely spoken languages in Nigeria — Hausa, Igbo, Nigerian-Pidgin, and Yorùbá. | @inproceedings{muhammad-etal-2022-naijasenti,
title = "{N}aija{S}enti: A {N}igerian {T}witter Sentiment Corpus for Multilingual Sentiment Analysis",
author = "Muhammad, Shamsuddeen Hassan and
Adelani, David Ifeoluwa and
Ruder, Sebastian and
Ahmad, Ibrahim Sa{'}id and
Abdulmumin, Idris and
Bello, Bello Shehu and
Choudhury, Monojit and
Emezue, Chris Chinenye and
Abdullahi, Saheed Salahudeen and
Aremu, Anuoluwapo and
Jorge, Al{\'\i}pio and
Brazdil, Pavel",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.63",
pages = "590--602",
} | null | 0 | 4 | ---
license: cc-by-nc-sa-4.0
tags:
- sentiment analysis, Twitter, tweets
- stopwords
multilinguality:
- monolingual
- multilingual
language:
- hau
- ibo
- yor
pretty_name: NaijaStopwords
---
# Naija-Lexicons
Naija-Lexicons is a part of the [Naija-Senti](https://huggingface.co/datasets/HausaNLP/NaijaSenti-Twitter) project. It is a list of collected stopwords from the four most widely spoken languages in Nigeria — Hausa, Igbo, Nigerian-Pidgin, and Yorùbá.
--------------------------------------------------------------------------------
## Dataset Description
- **Homepage:** https://github.com/hausanlp/NaijaSenti/tree/main/data/stopwords
- **Repository:** [GitHub](https://github.com/hausanlp/NaijaSenti/tree/main/data/stopwords)
- **Paper:** [NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis](https://aclanthology.org/2022.lrec-1.63/)
- **Leaderboard:** N/A
- **Point of Contact:** [Shamsuddeen Hassan Muhammad](shamsuddeen2004@gmail.com)
### Languages
The 3 most widely spoken indigenous Nigerian languages:
* Hausa (hau)
* Igbo (ibo)
* Yoruba (yor)
## Dataset Structure
### Data Instances
A list of lexicon instances in each of the 3 languages with their sentiment labels.
```
{
"word": "string",
"label": "string"
}
```
### How to use it
```python
from datasets import load_dataset
# You can load specific languages (e.g., Hausa). This downloads the manually created and translated lexicons.
ds = load_dataset("HausaNLP/Naija-Lexicons", "hau")
# You may also specify the split you want to download.
ds = load_dataset("HausaNLP/Naija-Lexicons", "hau", split = "manual")
```
## Additional Information
### Dataset Curators
* Shamsuddeen Hassan Muhammad
* Idris Abdulmumin
* Ibrahim Said Ahmad
* Bello Shehu Bello
### Licensing Information
This Naija-Lexicons dataset is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.
### Citation Information
```
@inproceedings{muhammad-etal-2022-naijasenti,
title = "{N}aija{S}enti: A {N}igerian {T}witter Sentiment Corpus for Multilingual Sentiment Analysis",
author = "Muhammad, Shamsuddeen Hassan and
Adelani, David Ifeoluwa and
Ruder, Sebastian and
Ahmad, Ibrahim Sa{'}id and
Abdulmumin, Idris and
Bello, Bello Shehu and
Choudhury, Monojit and
Emezue, Chris Chinenye and
Abdullahi, Saheed Salahudeen and
Aremu, Anuoluwapo and
Jorge, Al{\'\i}pio and
Brazdil, Pavel",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.63",
pages = "590--602",
}
```
### Contributions
> This work was carried out with support from Lacuna Fund, an initiative co-founded by The Rockefeller Foundation, Google.org, and Canada’s International Development Research Centre. The views expressed herein do not necessarily represent those of Lacuna Fund, its Steering Committee, its funders, or Meridian Institute. |
winglian/visual-novels-json | 2023-06-17T03:08:49.000Z | [
"region:us"
] | winglian | null | null | null | 0 | 4 | Entry not found |
renumics/beans-outlier | 2023-06-30T20:09:45.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:en",
"license:mit",
"region:us"
] | renumics | null | null | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
pretty_name: Beans
dataset_info:
features:
- name: image_file_path
dtype: string
- name: image
dtype: image
- name: labels
dtype:
class_label:
names:
'0': angular_leaf_spot
'1': bean_rust
'2': healthy
- name: embedding_foundation
sequence: float32
- name: embedding_ft
sequence: float32
- name: outlier_score_ft
dtype: float64
- name: outlier_score_foundation
dtype: float64
- name: nn_image
dtype: image
splits:
- name: train
num_bytes: 293531811.754
num_examples: 1034
download_size: 0
dataset_size: 293531811.754
---
# Dataset Card for "beans-outlier"
📚 This dataset is an enhanced version of the [ibean project of the AIR lab](https://github.com/AI-Lab-Makerere/ibean/).
The workflow is described in the medium article: [Changes of Embeddings during Fine-Tuning of Transformers](https://medium.com/@markus.stoll/changes-of-embeddings-during-fine-tuning-c22aa1615921).
## Explore the Dataset
The open-source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) allows you to explore this dataset. You can find a Hugging Face Space running Spotlight with this dataset here: <https://huggingface.co/spaces/renumics/beans-outlier>

Or you can explore it locally:
```python
!pip install renumics-spotlight datasets
from renumics import spotlight
import datasets
ds = datasets.load_dataset("renumics/beans-outlier", split="train")
df = ds.to_pandas()
df["label_str"] = df["labels"].apply(lambda x: ds.features["labels"].int2str(x))
dtypes = {
"nn_image": spotlight.Image,
"image": spotlight.Image,
"embedding_ft": spotlight.Embedding,
"embedding_foundation": spotlight.Embedding,
}
spotlight.show(
df,
dtype=dtypes,
layout="https://spotlight.renumics.com/resources/layout_pre_post_ft.json",
)
``` |
sert121/SpiderSQL | 2023-06-19T18:09:01.000Z | [
"license:mit",
"region:us"
] | sert121 | null | null | null | 0 | 4 | ---
license: mit
---
|
sadmoseby/oassist_transformed | 2023-06-19T20:18:47.000Z | [
"region:us"
] | sadmoseby | null | null | null | 0 | 4 | Entry not found |
AhmedSSoliman/CodeSearchNet | 2023-06-20T09:17:15.000Z | [
"license:ms-pl",
"region:us"
] | AhmedSSoliman | null | null | null | 0 | 4 | ---
license: ms-pl
---
|
timpal0l/scandisent | 2023-06-21T13:39:40.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:sv",
"language:no",
"language:da",
"language:en",
"language:fi",
"license:openrail",
"arxiv:2104.10441",
"region:us"
] | timpal0l | null | null | null | 1 | 4 | ---
license: openrail
task_categories:
- text-classification
language:
- sv
- no
- da
- en
- fi
pretty_name: ScandiSent
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository: https://github.com/timpal0l/ScandiSent**
- **Paper: https://arxiv.org/pdf/2104.10441.pdf**
- **Leaderboard:**
- **Point of Contact: Tim Isbister**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
IIC/livingner3 | 2023-06-21T15:31:48.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"biomedical",
"clinical",
"spanish",
"region:us"
] | IIC | null | null | null | 0 | 4 | ---
language:
- es
tags:
- biomedical
- clinical
- spanish
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- multi-label-classification
license:
- cc-by-4.0
pretty_name: LivingNER3
train-eval-index:
- task: text-classification
task_id: multi_label_classification
splits:
train_split: train
eval_split: test
metrics:
- type: f1
name: f1
---
# LivingNER
This is a third-party reupload of the [LivingNER](https://temu.bsc.es/livingner/) task 3 dataset.
It contains only task 3 for the Spanish language; it does not include the multilingual data or the background data.
This dataset is part of a benchmark in the paper [TODO](TODO).
### Citation Information
```bibtex
TODO
```
### Citation Information of the original dataset
```bibtex
@article{amiranda2022nlp,
title={Mention detection, normalization \& classification of species, pathogens, humans and food in clinical documents: Overview of LivingNER shared task and resources},
  author={Miranda-Escalada, Antonio and Farr{\'e}-Maduell, Eul{\`a}lia and Lima-L{\'o}pez, Salvador and Estrada, Darryl and Gasc{\'o}, Luis and Krallinger, Martin},
journal = {Procesamiento del Lenguaje Natural},
year={2022}
}
```
|
Jingmiao/PUZZLEQA | 2023-06-28T02:56:19.000Z | [
"language:en",
"license:apache-2.0",
"arxiv:2306.12255",
"region:us"
] | Jingmiao | null | null | null | 0 | 4 | ---
language:
- en
license: apache-2.0
---
### Acknowledgements
PUZZLEQA is scraped from the [NPR Sunday Puzzle official website](https://www.npr.org/series/4473090/sunday-puzzle) and the [NPR Puzzle Synopsis group](https://groups.google.com/g/nprpuzzle),
the latter maintained by a group of fans who run a mailing list distributing questions and answers for each week's puzzle.
The authors of the dataset cleaned the data and created multiple-choice questions based on the questions and answers.
### Creation
The multiple-choice dataset is generated from the PUZZLEQA dataset using the following algorithm:
1. Read the fr_big_exp.tsv.tsv file.
2. Group rule-question-answer triples from a given Sunday together (so the rules of each question are the same).
3. For each question, randomly select three other answers from the answers of the same Sunday, then shuffle the 3 selected answers with the correct answer to obtain 4 choices for the question.
4. Identify the correct answer for the given question as the "gold" answer.
Recent.tsv is the dataset based on the NPR puzzles from 2023.
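Step 3 of the algorithm above can be sketched as follows; this is an illustrative reimplementation of the sampling, not the authors' actual script, and the seeding is my own choice for reproducibility:

```python
import random

def make_choices(answer, same_sunday_answers, seed=0):
    """Build one multiple-choice item: 3 distractors drawn from other
    answers of the same Sunday, shuffled together with the correct answer."""
    rng = random.Random(seed)
    pool = [a for a in same_sunday_answers if a != answer]
    choices = rng.sample(pool, 3) + [answer]
    rng.shuffle(choices)
    # Return the choices and the index of the "gold" answer (step 4).
    return choices, choices.index(answer)
```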
# Citation
@inproceedings{zhao2023solving,
title={Solving and Generating NPR Sunday Puzzles with Large Language Models},
author={Jingmiao Zhao and Carolyn Jane Anderson},
year={2023},
eprint={2306.12255},
archivePrefix={arXiv},
primaryClass={cs.CL}
} |
ChanceFocus/flare-ner | 2023-07-27T00:02:41.000Z | [
"license:mit",
"region:us"
] | ChanceFocus | null | null | null | 0 | 4 | ---
license: mit
dataset_info:
features:
- name: query
dtype: string
- name: answer
dtype: string
- name: label
sequence: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 470523
num_examples: 408
- name: valid
num_bytes: 101644
num_examples: 103
- name: test
num_bytes: 156592
num_examples: 98
download_size: 224350
dataset_size: 728759
---
|
theonlydo/indonesia-slang | 2023-07-06T18:25:43.000Z | [
"region:us"
] | theonlydo | null | null | null | 0 | 4 | |
atom-in-the-universe/fanfics-10k-10k | 2023-06-23T09:28:54.000Z | [
"region:us"
] | atom-in-the-universe | null | null | null | 0 | 4 | Entry not found |
caldervf/cicero_dataset_with_embeddings_and_faiss_index | 2023-06-24T08:15:45.000Z | [
"region:us"
] | caldervf | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: content
dtype: string
- name: summary
dtype: string
- name: content_filtered
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 19279400
num_examples: 1143
download_size: 13285598
dataset_size: 19279400
---
# Dataset Card for "cicero_dataset_with_embeddings_and_faiss_index"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ChanceFocus/flare-finqa | 2023-08-18T20:03:26.000Z | [
"region:us"
] | ChanceFocus | null | null | null | 2 | 4 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 27056024
num_examples: 6251
- name: valid
num_bytes: 3764872
num_examples: 883
- name: test
num_bytes: 4846110
num_examples: 1147
download_size: 0
dataset_size: 35667006
---
# Dataset Card for "flare-finqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
layoric/labeled-multiple-choice-explained | 2023-06-26T00:10:58.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:unknown",
"region:us"
] | layoric | null | null | null | 0 | 4 | ---
license: unknown
language:
- en
size_categories:
- 1K<n<10K
---
This dataset is based on `under-tree/labeled-multiple-choice`, but uses GPT-3.5-turbo to generate explanations for each answer option.
This was a very basic attempt to follow the Orca paper's approach of using a 'teacher' model to provide more context for some trivia questions.
Questions were deduplicated based on the question text.
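The deduplication step might look like the sketch below; the normalization choices (lowercasing, whitespace collapsing) are my assumptions, not necessarily what was actually used:

```python
def dedupe_by_question(records):
    """Keep the first record for each distinct (normalized) question text."""
    seen = set()
    unique = []
    for rec in records:
        # Normalize case and whitespace so trivial variants collapse together.
        key = " ".join(rec["question"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```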
I used the Python library `guidance` to help generate the prompts. Below is the prompt template I used.
```
{{#role 'system'~}}
You are an AI assistant that helps people find information. User will give you a question. Your task is to answer as faithfully as you can, and most importantly, provide explanation why incorrect answers are not correct. While answering think step-by-step and justify your answer.
{{~/role}}
{{#role 'user'~}}
USER:
Topic: {{topic}}
Question: {{question}}
### Answer
The correct answer is:
{{answer_key}}). {{answer}}
### Explanation:
Let's break it down step by step.
1. Read the question and options carefully.
2. Identify the differences between the options.
3. Determine which options are not logical based on the difference.
4. Go through each incorrect answer providing an explanation why it is incorrect.
{{~/role}}
{{#role 'assistant'~}}
{{~gen 'explanation'}}
{{~/role}}
``` |
FreedomIntelligence/alpaca-gpt4-french | 2023-08-06T08:09:08.000Z | [
"license:apache-2.0",
"region:us"
] | FreedomIntelligence | null | null | null | 0 | 4 | ---
license: apache-2.0
---
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). |
wisenut-nlp-team/namu | 2023-07-10T07:46:04.000Z | [
"license:cc-by-4.0",
"region:us"
] | wisenut-nlp-team | null | null | null | 0 | 4 | ---
license: cc-by-4.0
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: contributors
sequence: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 8757569508
num_examples: 867023
download_size: 4782924595
dataset_size: 8757569508
---
```
from datasets import load_dataset
raw_dataset = load_dataset(
"wisenut-nlp-team/namu",
"raw",
use_auth_token="<your personal/api token>"
)
processed_dataset = load_dataset(
"wisenut-nlp-team/namu",
"processed",
use_auth_token="<your personal/api token>"
)
```
|
barbaroo/Faroese_BLARK_small | 2023-08-07T14:47:31.000Z | [
"task_categories:text-generation",
"language:fo",
"region:us"
] | barbaroo | null | null | null | 0 | 4 | ---
task_categories:
- text-generation
language:
- fo
---
# Dataset Card for Faroese_BLARK_small
## Dataset Description
All sentences are retrieved from:
- **Paper:**
Annika Simonsen, Sandra Saxov Lamhauge, Iben Nyholm Debess, and Peter Juel Henrichsen. 2022. Creating a Basic Language Resource Kit for Faroese. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 4637–4643, Marseille, France. European Language Resources Association.
### Dataset Summary
This dataset is a filtered version of the corpus (35.6 M tokens) first published as BLARK - Basic Language Resource Kit for Faroese.
The pre-processing and filtering steps include:
- Normalize format to utf-8
- Remove short sentences (fewer than 10 units, where units are separated by spaces)
- Remove archaic Faroese
- Remove separators ('\r', '\t', '\n')
- Remove non-standard formatting. Examples: '§§', ' | ', '**', ' • ', ' • ', '.- ', ': ?', '.?', '\xa0', '\xad', '_ _', '. .', etc.
- Remove (most) numbered lists, of formats: 1), 1:, Stk. 1 etc.
- Replace any run of repeated question marks, exclamation marks or full stops with a single one. Example: !!!!!! -> !
- Remove websites that start with http
- Remove sentences without (or with little) linguistic content. In practice: all sentences where more than half of the characters (excluding spaces) are numbers, punctuation marks or uppercase letters (acronyms and initials)
- Remove duplicates
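A rough sketch of how a few of these filters could be implemented — illustrative only; the thresholds follow the list above, but details of the original pipeline may differ:

```python
import re


def collapse_punctuation(sentence: str) -> str:
    """Replace runs of '!', '?' or '.' with a single mark (e.g. '!!!!!!' -> '!')."""
    return re.sub(r"([!?.])\1+", r"\1", sentence)


def keep_sentence(sentence: str) -> bool:
    """Apply a subset of the filters listed above (illustrative sketch)."""
    # Remove separator characters before checking.
    for sep in ("\r", "\t", "\n"):
        sentence = sentence.replace(sep, " ")
    # Drop short sentences (fewer than 10 space-separated units).
    if len(sentence.split()) < 10:
        return False
    # Drop websites that start with http.
    if sentence.strip().startswith("http"):
        return False
    # Drop sentences with little linguistic content: more than half of the
    # non-space characters are digits, punctuation, or uppercase letters.
    chars = [c for c in sentence if not c.isspace()]
    noisy = sum(1 for c in chars if not (c.isalpha() and c.islower()))
    return noisy <= len(chars) / 2
```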
### Supported Tasks and Leaderboards
Suitable for MLM and CLM
|
anzorq/kbd_speech | 2023-10-08T18:12:13.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"language:kbd",
"region:us"
] | anzorq | null | null | null | 1 | 4 | ---
language:
- kbd
task_categories:
- automatic-speech-recognition
- text-to-speech
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: gender
dtype: string
- name: country
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 193658385.11
num_examples: 20555
download_size: 518811329
dataset_size: 193658385.11
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "kbd_speech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TrainingDataPro/MacBook-Attacks-Dataset | 2023-09-14T16:54:13.000Z | [
"task_categories:video-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"finance",
"region:us"
] | TrainingDataPro | The dataset consists of videos of replay attacks played on different
models of MacBooks. The dataset solves tasks in the field of anti-spoofing and
it is useful for business and safety systems.
The dataset includes: **replay attacks** - videos of real people played on
a computer and filmed on the phone. | @InProceedings{huggingface:dataset,
title = {MacBook-Attacks-Dataset},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 4 | ---
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
language:
- en
tags:
- finance
dataset_info:
features:
- name: file
dtype: string
- name: phone
dtype: string
- name: computer
dtype: string
- name: gender
dtype: string
- name: age
dtype: int16
- name: country
dtype: string
splits:
- name: train
num_bytes: 1418
num_examples: 24
download_size: 573934283
dataset_size: 1418
---
# Antispoofing Replay Dataset
The dataset consists of videos of replay attacks played on different models of MacBooks. The dataset solves tasks in the field of anti-spoofing and it is useful for business and safety systems.
The dataset includes: **replay attacks** - videos of real people played on a computer and filmed on the phone.

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=MacBook-Attacks-Dataset) to discuss your requirements, learn about the price and buy the dataset.
# Content
The folder "attacks" includes videos of replay attacks
### Models of MacBooks in the dataset:
- MacBook 13
- MacBook Air
- MacBook Air 7
- MacBook Air 11
- MacBook Air 13
- MacBook Air M1
- MacBook Pro 12
- MacBook Pro 13
### File with the extension .csv
includes the following information for each media file:
- **file**: link to access the replay video,
- **phone**: the device used to capture the replay video,
- **computer**: the device used to play the video,
- **gender**: gender of a person in the video,
- **age**: age of the person in the video,
- **country**: country of the person
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=MacBook-Attacks-Dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
TrainingDataPro/monitors-replay-attacks-dataset | 2023-09-14T16:54:44.000Z | [
"task_categories:video-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"legal",
"region:us"
] | TrainingDataPro | The dataset consists of videos of replay attacks played on different models of
computers. The dataset solves tasks in the field of anti-spoofing and it is
useful for business and safety systems.
The dataset includes: **replay attacks** - videos of real people played
on a computer and filmed on the phone. | @InProceedings{huggingface:dataset,
title = {monitors-replay-attacks-dataset},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 4 | ---
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
language:
- en
tags:
- legal
dataset_info:
features:
- name: file
dtype: string
- name: phone
dtype: string
- name: computer
dtype: string
- name: gender
dtype: string
- name: age
dtype: int16
- name: country
dtype: string
splits:
- name: train
num_bytes: 588
num_examples: 10
download_size: 342902185
dataset_size: 588
---
# Monitors Replay Attacks Dataset
The dataset consists of videos of replay attacks played on different models of computers. The dataset solves tasks in the field of anti-spoofing and it is useful for business and safety systems.
The dataset includes: **replay attacks** - videos of real people played on a computer and filmed on the phone.

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=monitors-replay-attacks-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Content
The folder "attacks" includes videos of replay attacks
### Computer companies in the dataset:
- Dell
- LG
- ASUS
- HP
- Redmi
- AOC
- Samsung
### File with the extension .csv
includes the following information for each media file:
- **file**: link to access the replay video,
- **phone**: the device used to capture the replay video,
- **computer**: the device used to play the video,
- **gender**: gender of a person in the video,
- **age**: age of the person in the video,
- **country**: country of the person
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=monitors-replay-attacks-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
ammarnasr/the-stack-java-clean | 2023-08-14T21:18:42.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:code",
"license:openrail",
"code",
"region:us"
] | ammarnasr | null | null | null | 0 | 4 | ---
license: openrail
dataset_info:
features:
- name: hexsha
dtype: string
- name: size
dtype: int64
- name: content
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
splits:
- name: train
num_bytes: 3582248477.9086223
num_examples: 806789
- name: test
num_bytes: 394048264.9973618
num_examples: 88747
- name: valid
num_bytes: 3982797.09401595
num_examples: 897
download_size: 1323156008
dataset_size: 3980279540
task_categories:
- text-generation
language:
- code
tags:
- code
pretty_name: TheStack-Java
size_categories:
- 1M<n<10M
---
## Dataset 1: TheStack - Java - Cleaned
**Description**: This dataset is drawn from TheStack Corpus, an open-source code dataset with over 3TB of GitHub data covering 48 programming languages. We selected a small portion of this dataset to optimize smaller language models for Java, a popular statically typed language.
**Target Language**: Java
**Dataset Size**:
- Training: 900,000 files
- Validation: 50,000 files
- Test: 50,000 files
**Preprocessing**:
1. Selected Java as the target language due to its popularity on GitHub.
2. Filtered out files with average line length > 100 characters, maximum line length > 1000 characters, and alphabet ratio < 25%.
3. Split files into 90% training, 5% validation, and 5% test sets.
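The line-length and character-ratio criteria in step 2 could be checked roughly like this (an illustrative sketch matching the `avg_line_length`, `max_line_length` and `alphanum_fraction` features above; the exact definitions used to build the dataset may differ):

```python
def passes_quality_filters(content: str) -> bool:
    """Return True if a source file passes the filters from step 2 above:
    average line length <= 100, maximum line length <= 1000, and at least
    25% alphanumeric characters. Thresholds are taken from the card; the
    exact original computation is an assumption.
    """
    lines = content.splitlines() or [""]
    lengths = [len(line) for line in lines]
    avg_line_length = sum(lengths) / len(lengths)
    max_line_length = max(lengths)
    alphanum_fraction = sum(c.isalnum() for c in content) / max(len(content), 1)
    return (avg_line_length <= 100
            and max_line_length <= 1000
            and alphanum_fraction >= 0.25)
```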
**Tokenizer**: Byte Pair Encoding (BPE) tokenizer with tab and whitespace tokens. GPT-2 vocabulary extended with special tokens.
**Training Sequences**: Sequences constructed by joining training data text to reach a context length of 2048 tokens (1024 tokens for full fine-tuning). |
TrainingDataPro/anti-spoofing-real-waist-high-dataset | 2023-09-14T16:55:22.000Z | [
"task_categories:video-classification",
"task_categories:image-to-image",
"language:en",
"license:cc-by-nc-nd-4.0",
"legal",
"region:us"
] | TrainingDataPro | The dataset consists of waist-high selfies and video of real people.
The dataset solves tasks in the field of anti-spoofing and it is useful
for business and safety systems. | @InProceedings{huggingface:dataset,
title = {anti-spoofing-real-waist-high-dataset},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 4 | ---
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
- image-to-image
language:
- en
tags:
- legal
dataset_info:
features:
- name: photo
dtype: image
- name: video
dtype: string
- name: phone
dtype: string
- name: gender
dtype: string
- name: age
dtype: int8
- name: country
dtype: string
splits:
- name: train
num_bytes: 34728975
num_examples: 8
download_size: 195022198
dataset_size: 34728975
---
# Anti-Spoofing Real Waist-High Dataset
The dataset consists of waist-high selfies and videos of real people. The dataset solves tasks in the field of anti-spoofing and it is useful for business and safety systems.
### The dataset includes 2 different types of files:
- **Photo** - a selfie of a person taken on a mobile phone; the person is depicted alone, with the face clearly visible, and shown waist-high.
- **Video** - filmed on the front camera, in which the person moves his/her head left, right, up and down. The video lasts from 10 to 20 seconds.

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=anti-spoofing-real-waist-high-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Content
- The folder **"photo"** includes selfies of people
- The folder **"video"** includes videos of people
### File with the extension .csv
includes the following information for each media file:
- **photo**: link to access the selfie,
- **video**: link to access the video,
- **phone**: the device used to capture selfie and video,
- **gender**: gender of a person,
- **age**: age of the person,
- **country**: country of the person
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=anti-spoofing-real-waist-high-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
Ali-C137/Darija-Stories-Dataset | 2023-07-29T13:54:28.000Z | [
"task_categories:text-generation",
"language:ar",
"license:cc-by-nc-4.0",
"region:us"
] | Ali-C137 | null | null | null | 3 | 4 | ---
dataset_info:
features:
- name: ChapterName
dtype: string
- name: ChapterLink
dtype: string
- name: Author
dtype: string
- name: Text
dtype: string
- name: Tags
dtype: int64
splits:
- name: train
num_bytes: 476926644
num_examples: 6142
download_size: 241528641
dataset_size: 476926644
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- ar
pretty_name: Darija (Moroccan Arabic) Stories Dataset
---
# Dataset Card for "Darija-Stories-Dataset"
**Darija (Moroccan Arabic) Stories Dataset is a large-scale collection of stories written in Moroccan Arabic dialect (Darija).**
## Dataset Description
Darija (Moroccan Arabic) Stories Dataset contains a diverse range of stories that provide insights into Moroccan culture, traditions, and everyday life. The dataset consists of textual content from various chapters, including narratives, dialogues, and descriptions. Each story chapter is associated with a URL link for online reading or reference. The dataset also includes information about the author and tags that provide additional context or categorization.
## Dataset Details
- **Homepage:** https://huggingface.co/datasets/Ali-C137/Darija-Stories-Dataset
- **Author:** Elfilali Ali
- **Email:** ali.elfilali00@gmail.com, alielfilali0909@gmail.com
- **Github Profile:** [https://github.com/alielfilali01](https://github.com/alielfilali01)
- **LinkedIn Profile:** [https://www.linkedin.com/in/alielfilali01/](https://www.linkedin.com/in/alielfilali01/)
## Dataset Size
The Darija (Moroccan Arabic) Stories Dataset is the largest publicly available dataset in Moroccan Arabic dialect (Darija) to date, with over 70 million tokens.
## Potential Use Cases
- **Arabic Dialect NLP:** Researchers can utilize this dataset to develop and evaluate NLP models specifically designed for Arabic dialects, with a focus on Moroccan Arabic (Darija). Tasks such as dialect identification, part-of-speech tagging, and named entity recognition can be explored.
- **Sentiment Analysis:** The dataset can be used to analyze sentiment expressed in Darija stories, enabling sentiment classification, emotion detection, or opinion mining within the context of Moroccan culture.
- **Text Generation:** Researchers and developers can leverage the dataset to generate new stories or expand existing ones using various text generation techniques, facilitating the development of story generation systems specifically tailored for Moroccan Arabic dialect.
## Dataset Access
The Darija (Moroccan Arabic) Stories Dataset is available for academic and non-commercial use, under a Creative Commons Non-Commercial license.
## Citation
Please use the following citation when referencing the Darija (Moroccan Arabic) Stories Dataset:
```
@dataset{
title = {Darija (Moroccan Arabic) Stories Dataset},
author = {Elfilali Ali},
howpublished = {Dataset},
url = {https://huggingface.co/datasets/Ali-C137/Darija-Stories-Dataset},
year = {2023},
}
```
|
crumb/flan-ul2-tinystories | 2023-07-02T04:47:47.000Z | [
"language:en",
"license:mit",
"region:us"
] | crumb | null | null | null | 2 | 4 | ---
license: mit
language:
- en
---
Around a quarter of a million examples generated from Flan-UL2 (20B) with the prompt "Write a short story using the vocabulary of a first-grader.", to be used in an experimental curriculum-learning setting. I had to checkpoint every 1024 examples to mitigate the program slowing down due to memory usage. This was run in bf16 on an RTX A6000 with the following settings:
```
top_k = random between (40, 128)
temperature = random between (0.6, 0.95)
max_length = 128
batch_size = 32
```
I wanted to avoid a uniform, boring set that repeats the same exact patterns, so I randomly modulated the temperature and top_k values to get a good mix. This cost ~$6 USD to create on RunPod. |
Symato/c4_vi-filtered_200GB | 2023-07-03T11:53:47.000Z | [
"region:us"
] | Symato | null | null | null | 0 | 4 | Entry not found |
bias-amplified-splits/mnli | 2023-07-04T11:48:21.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"arxiv:2305.18917",
"arxiv:1704.05426",
"region:us"
] | bias-amplified-splits | GLUE, the General Language Understanding Evaluation benchmark
(https://gluebenchmark.com/) is a collection of resources for training,
evaluating, and analyzing natural language understanding systems. | @inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
} | null | 0 | 4 | ---
license: cc-by-4.0
dataset_info:
- config_name: minority_examples
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: idx
dtype: int32
splits:
- name: train.biased
num_bytes: 58497575
num_examples: 309873
- name: train.anti_biased
num_bytes: 16122071
num_examples: 82829
- name: validation_matched.biased
num_bytes: 1443678
num_examples: 7771
- name: validation_matched.anti_biased
num_bytes: 390105
num_examples: 2044
- name: validation_mismatched.biased
num_bytes: 1536381
num_examples: 7797
- name: validation_mismatched.anti_biased
num_bytes: 412850
num_examples: 2035
download_size: 92308759
dataset_size: 78402660
- config_name: partial_input
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: idx
dtype: int32
splits:
- name: train.biased
num_bytes: 59529986
num_examples: 309873
- name: train.anti_biased
num_bytes: 15089660
num_examples: 82829
- name: validation_matched.biased
num_bytes: 1445996
num_examples: 7745
- name: validation_matched.anti_biased
num_bytes: 387787
num_examples: 2070
- name: validation_mismatched.biased
num_bytes: 1529878
num_examples: 7758
- name: validation_mismatched.anti_biased
num_bytes: 419353
num_examples: 2074
download_size: 92308759
dataset_size: 78402660
task_categories:
- text-classification
language:
- en
pretty_name: MultiNLI
size_categories:
- 100K<n<1M
---
# Dataset Card for Bias-amplified Splits for MultiNLI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Fighting Bias with Bias repo](https://github.com/schwartz-lab-nlp/fight-bias-with-bias)
- **Paper:** [arXiv](https://arxiv.org/abs/2305.18917)
- **Point of Contact:** [Yuval Reif](mailto:yuval.reif@mail.huji.ac.il)
- **Original Dataset's Paper:** [MultiNLI](https://arxiv.org/abs/1704.05426)
### Dataset Summary
Bias-amplified splits is a novel evaluation framework to assess model robustness, by amplifying dataset biases in the training data and challenging models to generalize beyond them. This framework is defined by a bias-amplified training set and a hard, anti-biased test set, which we automatically extract from existing datasets using model-based methods.
Our experiments show that the identified anti-biased examples are naturally challenging for models, and moreover, models trained on bias-amplified data exhibit dramatic performance drops on anti-biased examples, which are not mitigated by common approaches to improve generalization.
Here we apply our framework to **MultiNLI**, a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information.
Our evaluation framework can be applied to any existing dataset, even those considered obsolete, to test model robustness. We hope our work will guide the development of robust models that do not rely on superficial biases and correlations.
#### Evaluation Results (DeBERTa-large)
##### For splits based on minority examples:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 91.1 | 74.3 |
| Biased training split | 88.7 | 57.5 |
##### For splits based on partial-input model:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 91.1 | 81.4 |
| Biased training split | 89.5 | 71.8 |
#### Loading the Data
```
from datasets import load_dataset
# choose which bias detection method to use for the bias-amplified splits: either "minority_examples" or "partial_input"
dataset = load_dataset("bias-amplified-splits/mnli", "minority_examples")
# use the biased training split and anti-biased test split
train_dataset = dataset['train.biased']
eval_dataset = dataset['validation_matched.anti_biased']
```
## Dataset Structure
### Data Instances
Data instances are taken directly from MultiNLI (GLUE version), and re-split into biased and anti-biased subsets. Here is an example of an instance from the dataset:
```
{
"idx": 0,
"premise": "Your contribution helped make it possible for us to provide our students with a quality education.",
"hypothesis": "Your contributions were of no help with our students' education.",
"label": 2
}
```
### Data Fields
- `idx`: unique identifier for the example within its original data splits (e.g., validation matched)
- `premise`: a piece of text
- `hypothesis`: a piece of text that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- `label`: one of `0`, `1` and `2` (`entailment`, `neutral`, and `contradiction`)
### Data Splits
Bias-amplified splits require a method to detect *biased* and *anti-biased* examples in datasets. We release bias-amplified splits created with each of these two methods:
- **Minority examples**: A novel method we introduce that leverages representation learning and clustering for identifying anti-biased *minority examples* (Tu et al., 2020)—examples that defy common statistical patterns found in the rest of the dataset.
- **Partial-input baselines**: A common method for identifying biased examples containing annotation artifacts in a dataset, which examines the performance of models that are restricted to using only part of the input. Such models, if successful, are bound to rely on unintended or spurious patterns in the dataset.
Using each of the two methods, we split each of the original train and test splits into biased and anti-biased subsets. See the [paper](https://arxiv.org/abs/2305.18917) for more details.
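As a toy illustration of the minority-examples idea — not the paper's actual procedure, which clusters learned representations — one can score each example by how isolated its representation is and flag the most isolated fraction as anti-biased:

```python
import math


def flag_isolated_examples(embeddings, k=5, frac=0.2):
    """Flag roughly the `frac` most isolated examples, scored by average
    distance to their k nearest neighbours in representation space.

    Illustrative density heuristic only: the function name, k, and frac
    are assumptions, and the paper's method differs in its details.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    scores = []
    for i, e in enumerate(embeddings):
        neighbour_dists = sorted(dist(e, o) for j, o in enumerate(embeddings) if j != i)
        scores.append(sum(neighbour_dists[:k]) / k)
    # Flag every example whose score reaches the top-`frac` cutoff.
    cutoff = sorted(scores, reverse=True)[max(int(frac * len(embeddings)) - 1, 0)]
    return [s >= cutoff for s in scores]
```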
#### Minority Examples
| Dataset Split | Number of Instances in Split |
|-------------------------------------|------------------------------|
| Train - biased | 309873 |
| Train - anti-biased | 82829 |
| Validation matched - biased | 7771 |
| Validation matched - anti-biased | 2044 |
| Validation mismatched - biased | 7797 |
| Validation mismatched - anti-biased | 2035 |
#### Partial-input Baselines
| Dataset Split | Number of Instances in Split |
|-------------------------------------|------------------------------|
| Train - biased | 309873 |
| Train - anti-biased | 82829 |
| Validation matched - biased | 7745 |
| Validation matched - anti-biased | 2070 |
| Validation mismatched - biased | 7758 |
| Validation mismatched - anti-biased | 2074 |
## Dataset Creation
### Curation Rationale
NLP models often rely on superficial cues known as *dataset biases* to achieve impressive performance, and can fail on examples where these biases do not hold. To develop more robust, unbiased models, recent work aims to filter biased examples from training sets. We argue that in order to encourage the development of robust models, we should in fact **amplify** biases in the training sets, while adopting the challenge set approach and making test sets anti-biased. To implement our approach, we introduce a simple framework that can be applied automatically to any existing dataset to use it for testing model robustness.
### Annotations
#### Annotation process
No new annotations are required to create bias-amplified splits. Existing data instances are split into *biased* and *anti-biased* splits based on automatic model-based methods to detect such examples.
## Considerations for Using the Data
### Social Impact of Dataset
Bias-amplified splits were created to promote the development of robust NLP models that do not rely on superficial biases and correlations, and provide more challenging evaluation of existing systems.
### Discussion of Biases
We propose to use bias-amplified splits to complement benchmarks with challenging evaluation settings that test model robustness, in addition to the dataset’s main training and test sets. As such, while existing dataset biases are *amplified* during training with bias-amplified splits, these splits are intended primarily for model evaluation, to expose the bias-exploiting behaviors of models and to identify more robust models and effective robustness interventions.
## Additional Information
### Dataset Curators
Bias-amplified splits were introduced by Yuval Reif and Roy Schwartz from the [Hebrew University of Jerusalem](https://schwartz-lab-huji.github.io).
MultiNLI was developed by Adina Williams, Nikita Nangia and Samuel Bowman.
### Citation Information
```
@misc{reif2023fighting,
title = "Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases",
author = "Yuval Reif and Roy Schwartz",
month = may,
year = "2023",
url = "https://arxiv.org/pdf/2305.18917",
}
```
Source dataset:
```
@InProceedings{N18-1101,
author = "Williams, Adina
and Nangia, Nikita
and Bowman, Samuel",
title = "A Broad-Coverage Challenge Corpus for
Sentence Understanding through Inference",
booktitle = "Proceedings of the 2018 Conference of
the North American Chapter of the
Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long
Papers)",
year = "2018",
publisher = "Association for Computational Linguistics",
pages = "1112--1122",
location = "New Orleans, Louisiana",
url = "http://aclweb.org/anthology/N18-1101"
}
``` |
bias-amplified-splits/anli | 2023-07-04T11:49:28.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2305.18917",
"arxiv:1910.14599",
"region:us"
] | bias-amplified-splits | The Adversarial Natural Language Inference (ANLI) is a new large-scale NLI benchmark dataset,
The dataset is collected via an iterative, adversarial human-and-model-in-the-loop procedure.
ANLI is much more difficult than its predecessors including SNLI and MNLI.
It contains three rounds. Each round has train/dev/test splits. | @InProceedings{nie2019adversarial,
title={Adversarial NLI: A New Benchmark for Natural Language Understanding},
author={Nie, Yixin
and Williams, Adina
and Dinan, Emily
and Bansal, Mohit
and Weston, Jason
and Kiela, Douwe},
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
year = "2020",
publisher = "Association for Computational Linguistics",
} | null | 0 | 4 | ---
license: cc-by-nc-4.0
dataset_info:
- config_name: minority_examples
features:
- name: round
dtype: string
- name: uid
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: reason
dtype: string
splits:
- name: train.biased
num_bytes: 61260115
num_examples: 134068
- name: train.anti_biased
num_bytes: 13246263
num_examples: 28797
- name: validation.biased
num_bytes: 1311433
num_examples: 2317
- name: validation.anti_biased
num_bytes: 500409
num_examples: 883
- name: test.biased
num_bytes: 1284544
num_examples: 2262
- name: test.anti_biased
num_bytes: 539798
num_examples: 938
download_size: 86373189
dataset_size: 78142562
- config_name: partial_input
features:
- name: round
dtype: string
- name: uid
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: reason
dtype: string
splits:
- name: train.biased
num_bytes: 60769911
num_examples: 134068
- name: train.anti_biased
num_bytes: 13736467
num_examples: 28797
- name: validation.biased
num_bytes: 1491254
num_examples: 2634
- name: validation.anti_biased
num_bytes: 320588
num_examples: 566
- name: test.biased
num_bytes: 1501586
num_examples: 2634
- name: test.anti_biased
num_bytes: 322756
num_examples: 566
download_size: 86373189
dataset_size: 78142562
task_categories:
- text-classification
language:
- en
pretty_name: Adversarial NLI
size_categories:
- 100K<n<1M
---
# Dataset Card for Bias-amplified Splits for Adversarial NLI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Fighting Bias with Bias repo](https://github.com/schwartz-lab-nlp/fight-bias-with-bias)
- **Paper:** [arXiv](https://arxiv.org/abs/2305.18917)
- **Point of Contact:** [Yuval Reif](mailto:yuval.reif@mail.huji.ac.il)
- **Original Dataset's Paper:** [ANLI](https://arxiv.org/abs/1910.14599)
### Dataset Summary
Bias-amplified splits is a novel evaluation framework to assess model robustness, by amplifying dataset biases in the training data and challenging models to generalize beyond them. This framework is defined by a bias-amplified training set and a hard, anti-biased test set, which we automatically extract from existing datasets using model-based methods.
Our experiments show that the identified anti-biased examples are naturally challenging for models, and moreover, models trained on bias-amplified data exhibit dramatic performance drops on anti-biased examples, which are not mitigated by common approaches to improve generalization.
Here we apply our framework to Adversarial Natural Language Inference (ANLI), a large-scale NLI benchmark dataset. The dataset was collected via an iterative, adversarial human-and-model-in-the-loop procedure. ANLI is much more difficult than its predecessors, including SNLI and MNLI.
Our evaluation framework can be applied to any existing dataset, even those considered obsolete, to test model robustness. We hope our work will guide the development of robust models that do not rely on superficial biases and correlations.
#### Evaluation Results (DeBERTa-large)
##### For splits based on minority examples:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 67.5 | 58.3 |
| Biased training split | 60.6 | 21.4 |
##### For splits based on partial-input model:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 67.5 | 50.0 |
| Biased training split | 62.5 | 28.3 |
#### Loading the Data
ANLI contains three rounds of data collection, and each round has train/dev/test splits. We concatenated the splits from all rounds to create unified train/dev/test splits.
```
from datasets import load_dataset
# choose which bias detection method to use for the bias-amplified splits: either "minority_examples" or "partial_input"
dataset = load_dataset("bias-amplified-splits/anli", "minority_examples")
# use the biased training split and anti-biased test split
train_dataset = dataset['train.biased']
eval_dataset = dataset['validation.anti_biased']
```
## Dataset Structure
### Data Instances
Data instances are taken directly from ANLI, and re-split into biased and anti-biased subsets. Here is an example of an instance from the dataset:
```
{
"round": "r1",
  "uid": "20a331ee-cf54-4e8a-9ff9-6152cd679780",
  "premise": "Milton Teagle \"Richard\" Simmons (born July 12, 1948) is an American fitness guru, actor, and comedian. He promotes weight-loss programs, prominently through his \"Sweatin' to the Oldies\" line of aerobics videos and is known for his eccentric, flamboyant, and energetic personality.",
  "hypothesis": "Milton Teagle \"Richard\" Simmons created his \"Sweatin' to the Oldies\" line of aerobics videos without help or input from anyone else.",
  "label": 1,
  "reason": "The context gives no information as to how the \"Sweatin' to the Oldies\" videos are produced, Simmons may well produce them alone, or may produce them with a team. The system may have had difficulty with this because it is unlikely that Simmons produced the videos alone."
}
```
### Data Fields
- `round`: which round of data collection the example comes from (one of `r1`, `r2` and `r3`)
- `uid`: unique identifier for the example.
- `premise`: a piece of text
- `hypothesis`: a piece of text that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- `label`: one of `0`, `1` and `2` (`entailment`, `neutral`, and `contradiction`)
- `reason`: explanation why the label is true (only for some examples).
### Data Splits
Bias-amplified splits require a method to detect *biased* and *anti-biased* examples in datasets. We release bias-amplified splits created with each of these two methods:
- **Minority examples**: A novel method we introduce that leverages representation learning and clustering for identifying anti-biased *minority examples* (Tu et al., 2020)—examples that defy common statistical patterns found in the rest of the dataset.
- **Partial-input baselines**: A common method for identifying biased examples containing annotation artifacts in a dataset, which examines the performance of models that are restricted to using only part of the input. Such models, if successful, are bound to rely on unintended or spurious patterns in the dataset.
Using each of the two methods, we split each of the original train and test splits into biased and anti-biased subsets. See the [paper](https://arxiv.org/abs/2305.18917) for more details.
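To make the partial-input idea concrete, here is a toy, pure-Python sketch — not the actual baseline used to build these splits, which was a trained model: a crude hypothesis-only "model" counts label co-occurrences per token, and the examples it classifies correctly are flagged as biased.

```python
# Toy partial-input (hypothesis-only) baseline -- illustrative only.
# A real partial-input baseline would be a trained classifier; here we
# just count how often each hypothesis token co-occurs with each label.
from collections import Counter, defaultdict

def train_hypothesis_only(examples):
    """Count label frequencies per hypothesis token (a crude artifact detector)."""
    token_label_counts = defaultdict(Counter)
    for ex in examples:
        for tok in ex["hypothesis"].lower().split():
            token_label_counts[tok][ex["label"]] += 1
    return token_label_counts

def predict(model, hypothesis, default=1):
    """Vote over per-token label counts; fall back to neutral (1)."""
    votes = Counter()
    for tok in hypothesis.lower().split():
        votes.update(model.get(tok, Counter()))
    return votes.most_common(1)[0][0] if votes else default

def split_biased(examples, model):
    """Examples the hypothesis-only model gets right are flagged as biased."""
    biased, anti_biased = [], []
    for ex in examples:
        bucket = biased if predict(model, ex["hypothesis"]) == ex["label"] else anti_biased
        bucket.append(ex)
    return biased, anti_biased

# Made-up examples: negation words spuriously predict `contradiction` (2).
train = [
    {"hypothesis": "no animals are outside", "label": 2},
    {"hypothesis": "nobody is sleeping", "label": 2},
    {"hypothesis": "a person is outside", "label": 0},
]
model = train_hypothesis_only(train)
biased, anti_biased = split_biased(train, model)
```

In the real framework, the correctly classified subset seeds the bias-amplified training split, while the examples the restricted model fails on form the anti-biased evaluation split.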
#### Minority Examples
| Dataset Split | Number of Instances in Split |
|--------------------------|------------------------------|
| Train - biased | 134068 |
| Train - anti-biased | 28797 |
| Validation - biased | 2317 |
| Validation - anti-biased | 883 |
| Test - biased | 2262 |
| Test - anti-biased | 938 |
#### Partial-input Baselines
| Dataset Split | Number of Instances in Split |
|--------------------------|------------------------------|
| Train - biased | 134068 |
| Train - anti-biased | 28797 |
| Validation - biased | 2634 |
| Validation - anti-biased | 566 |
| Test - biased | 2634 |
| Test - anti-biased | 566 |
## Dataset Creation
### Curation Rationale
NLP models often rely on superficial cues known as *dataset biases* to achieve impressive performance, and can fail on examples where these biases do not hold. To develop more robust, unbiased models, recent work aims to filter biased examples from training sets. We argue that in order to encourage the development of robust models, we should in fact **amplify** biases in the training sets, while adopting the challenge set approach and making test sets anti-biased. To implement our approach, we introduce a simple framework that can be applied automatically to any existing dataset to use it for testing model robustness.
### Annotations
#### Annotation process
No new annotations are required to create bias-amplified splits. Existing data instances are split into *biased* and *anti-biased* splits based on automatic model-based methods to detect such examples.
## Considerations for Using the Data
### Social Impact of Dataset
Bias-amplified splits were created to promote the development of robust NLP models that do not rely on superficial biases and correlations, and provide more challenging evaluation of existing systems.
### Discussion of Biases
We propose to use bias-amplified splits to complement benchmarks with challenging evaluation settings that test model robustness, in addition to the dataset’s main training and test sets. As such, while existing dataset biases are *amplified* during training with bias-amplified splits, these splits are intended primarily for model evaluation, to expose the bias-exploiting behaviors of models and to identify more robust models and effective robustness interventions.
## Additional Information
### Dataset Curators
Bias-amplified splits were introduced by Yuval Reif and Roy Schwartz from the [Hebrew University of Jerusalem](https://schwartz-lab-huji.github.io).
ANLI was developed by Adina Williams, Tristan Thrush and Douwe Kiela.
### Citation Information
```
@misc{reif2023fighting,
title = "Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases",
author = "Yuval Reif and Roy Schwartz",
month = may,
year = "2023",
url = "https://arxiv.org/pdf/2305.18917",
}
```
Source dataset:
```
@inproceedings{williams-etal-2020-anlizing,
title = "ANLIzing the Adversarial Natural Language Inference Dataset",
author = "Adina Williams and
Tristan Thrush and
Douwe Kiela",
booktitle = "Proceedings of the 5th Annual Meeting of the Society for Computation in Linguistics",
year = "2022",
publisher = "Association for Computational Linguistics",
}
``` |
bias-amplified-splits/qqp | 2023-07-04T11:47:36.000Z | [
"task_categories:text-classification",
"language:en",
"license:cc-by-4.0",
"arxiv:2305.18917",
"arxiv:1804.07461",
"region:us"
] | bias-amplified-splits | GLUE, the General Language Understanding Evaluation benchmark
(https://gluebenchmark.com/) is a collection of resources for training,
evaluating, and analyzing natural language understanding systems. | @inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
} | null | 0 | 4 | ---
license: cc-by-4.0
dataset_info:
- config_name: minority_examples
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_duplicate
'1': duplicate
- name: idx
dtype: int32
splits:
- name: train.biased
num_bytes: 42391456
num_examples: 297735
- name: train.anti_biased
num_bytes: 8509364
num_examples: 66111
- name: validation.biased
num_bytes: 4698206
num_examples: 32968
- name: validation.anti_biased
num_bytes: 955548
num_examples: 7462
download_size: 70726976
dataset_size: 56554574
- config_name: partial_input
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_duplicate
'1': duplicate
- name: idx
dtype: int32
splits:
- name: train.biased
num_bytes: 42788212
num_examples: 297735
- name: train.anti_biased
num_bytes: 8112608
num_examples: 66111
- name: validation.biased
num_bytes: 4712327
num_examples: 33084
- name: validation.anti_biased
num_bytes: 941427
num_examples: 7346
download_size: 70726976
dataset_size: 56554574
task_categories:
- text-classification
language:
- en
pretty_name: Quora Questions Pairs
---
# Dataset Card for Bias-amplified Splits for QQP
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Fighting Bias with Bias repo](https://github.com/schwartz-lab-nlp/fight-bias-with-bias)
- **Paper:** [arXiv](https://arxiv.org/abs/2305.18917)
- **Point of Contact:** [Yuval Reif](mailto:yuval.reif@mail.huji.ac.il)
- **Original Dataset's Paper:** [GLUE](https://arxiv.org/abs/1804.07461)
### Dataset Summary
Bias-amplified splits is a novel evaluation framework to assess model robustness, by amplifying dataset biases in the training data and challenging models to generalize beyond them. This framework is defined by a bias-amplified training set and a hard, anti-biased test set, which we automatically extract from existing datasets using model-based methods.
Our experiments show that the identified anti-biased examples are naturally challenging for models, and moreover, models trained on bias-amplified data exhibit dramatic performance drops on anti-biased examples, which are not mitigated by common approaches to improve generalization.
Here we apply our framework to the Quora Question Pairs dataset (QQP), a dataset composed of question pairs where the task is to determine if the questions are paraphrases of each other (have the same meaning).
Our evaluation framework can be applied to any existing dataset, even those considered obsolete, to test model robustness. We hope our work will guide the development of robust models that do not rely on superficial biases and correlations.
#### Evaluation Results (DeBERTa-large)
##### For splits based on minority examples:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 93.0 | 77.6 |
| Biased training split | 87.0 | 36.8 |
##### For splits based on partial-input model:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 93.0 | 81.3 |
| Biased training split | 90.3 | 63.9 |
#### Loading the Data
```
from datasets import load_dataset
# choose which bias detection method to use for the bias-amplified splits: either "minority_examples" or "partial_input"
dataset = load_dataset("bias-amplified-splits/qqp", "minority_examples")
# use the biased training split and anti-biased test split
train_dataset = dataset['train.biased']
eval_dataset = dataset['validation.anti_biased']
```
## Dataset Structure
### Data Instances
Data instances are taken directly from QQP (GLUE version), and re-split into biased and anti-biased subsets. Here is an example of an instance from the dataset:
```
{
"idx": 56,
"question1": "How do I buy used car in India?",
"question2": "Which used car should I buy in India?",
"label": 0
}
```
### Data Fields
- `idx`: unique identifier for the example within its original data splits (e.g., validation set)
- `question1`: a question asked on Quora
- `question2`: a question asked on Quora
- `label`: one of `0` and `1` (`not duplicate` and `duplicate`)
### Data Splits
Bias-amplified splits require a method to detect *biased* and *anti-biased* examples in datasets. We release bias-amplified splits created with each of these two methods:
- **Minority examples**: A novel method we introduce that leverages representation learning and clustering for identifying anti-biased *minority examples* (Tu et al., 2020)—examples that defy common statistical patterns found in the rest of the dataset.
- **Partial-input baselines**: A common method for identifying biased examples containing annotation artifacts in a dataset, which examines the performance of models that are restricted to using only part of the input. Such models, if successful, are bound to rely on unintended or spurious patterns in the dataset.
Using each of the two methods, we split each of the original train and test splits into biased and anti-biased subsets. See the [paper](https://arxiv.org/abs/2305.18917) for more details.
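As a toy illustration of the minority-examples idea (hand-picked 2-D points stand in for learned representations — the actual method clusters model embeddings):

```python
# Illustrative sketch: examples falling in small clusters of the
# representation space are treated as "minority" (anti-biased) examples.
# Points and centroids are toy 2-D stand-ins for learned embeddings.
import math
from collections import Counter

def assign(points, centroids):
    """Assign each point to the index of its nearest centroid."""
    return [
        min(range(len(centroids)), key=lambda k: math.dist(p, centroids[k]))
        for p in points
    ]

def minority_split(points, centroids, min_frac=0.2):
    """Indices in clusters smaller than `min_frac` of the data are minority."""
    labels = assign(points, centroids)
    counts = Counter(labels)
    threshold = min_frac * len(points)
    majority = [i for i, l in enumerate(labels) if counts[l] >= threshold]
    minority = [i for i, l in enumerate(labels) if counts[l] < threshold]
    return majority, minority

# Four points form a dense cluster; the fifth is a lone "minority" example.
points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (0.1, 0.1), (5.0, 5.0)]
majority, minority = minority_split(points, centroids=[(0, 0), (5, 5)], min_frac=0.3)
```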
#### Minority Examples
| Dataset Split | Number of Instances in Split |
|--------------------------|------------------------------|
| Train - biased | 297735 |
| Train - anti-biased | 66111 |
| Validation - biased | 32968 |
| Validation - anti-biased | 7462 |
#### Partial-input Baselines
| Dataset Split | Number of Instances in Split |
|--------------------------|------------------------------|
| Train - biased | 297735 |
| Train - anti-biased | 66111 |
| Validation - biased | 33084 |
| Validation - anti-biased | 7346 |
## Dataset Creation
### Curation Rationale
NLP models often rely on superficial cues known as *dataset biases* to achieve impressive performance, and can fail on examples where these biases do not hold. To develop more robust, unbiased models, recent work aims to filter biased examples from training sets. We argue that in order to encourage the development of robust models, we should in fact **amplify** biases in the training sets, while adopting the challenge set approach and making test sets anti-biased. To implement our approach, we introduce a simple framework that can be applied automatically to any existing dataset to use it for testing model robustness.
### Annotations
#### Annotation process
No new annotations are required to create bias-amplified splits. Existing data instances are split into *biased* and *anti-biased* splits based on automatic model-based methods to detect such examples.
## Considerations for Using the Data
### Social Impact of Dataset
Bias-amplified splits were created to promote the development of robust NLP models that do not rely on superficial biases and correlations, and provide more challenging evaluation of existing systems.
### Discussion of Biases
We propose to use bias-amplified splits to complement benchmarks with challenging evaluation settings that test model robustness, in addition to the dataset’s main training and test sets. As such, while existing dataset biases are *amplified* during training with bias-amplified splits, these splits are intended primarily for model evaluation, to expose the bias-exploiting behaviors of models and to identify more robust models and effective robustness interventions.
## Additional Information
### Dataset Curators
Bias-amplified splits were introduced by Yuval Reif and Roy Schwartz from the [Hebrew University of Jerusalem](https://schwartz-lab-huji.github.io).
QQP data was released by Quora and distributed as part of the GLUE benchmark.
### Citation Information
```
@misc{reif2023fighting,
title = "Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases",
author = "Yuval Reif and Roy Schwartz",
month = may,
year = "2023",
url = "https://arxiv.org/pdf/2305.18917",
}
```
Source dataset:
```
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
``` |
bias-amplified-splits/wanli | 2023-07-04T10:59:59.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"arxiv:2305.18917",
"arxiv:2201.05955",
"region:us"
] | bias-amplified-splits | WANLI (Worker-AI Collaboration for NLI) is a collection of 108K English sentence pairs for the task of natural language inference (NLI).
Each example is created by first identifying a "pocket" of examples in MultiNLI (Williams et al., 2018) that share a challenging reasoning pattern, then instructing GPT-3 to write a new example with the same pattern.
The set of generated examples are automatically filtered to contain those most likely to aid model training, and finally labeled and optionally revised by human annotators. | @misc{liu-etal-2022-wanli,
title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
author = "Liu, Alisa and
Swayamdipta, Swabha and
Smith, Noah A. and
Choi, Yejin",
month = jan,
year = "2022",
url = "https://arxiv.org/pdf/2201.05955",
} | null | 0 | 4 | ---
license: cc-by-4.0
dataset_info:
- config_name: minority_examples
features:
- name: id
dtype: int64
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: gold
dtype: string
- name: genre
dtype: string
- name: pairID
dtype: string
splits:
- name: train.biased
num_bytes: 17807491
num_examples: 89402
- name: train.anti_biased
num_bytes: 2690706
num_examples: 13483
- name: test.biased
num_bytes: 865310
num_examples: 4363
- name: test.anti_biased
num_bytes: 127605
num_examples: 637
download_size: 26671494
dataset_size: 21491112
- config_name: partial_input
features:
- name: id
dtype: int64
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: gold
dtype: string
- name: genre
dtype: string
- name: pairID
dtype: string
splits:
- name: train.biased
num_bytes: 17792846
num_examples: 89402
- name: train.anti_biased
num_bytes: 2705351
num_examples: 13483
- name: test.biased
num_bytes: 858069
num_examples: 4344
- name: test.anti_biased
num_bytes: 134846
num_examples: 656
download_size: 26671494
dataset_size: 21491112
task_categories:
- text-classification
language:
- en
pretty_name: WANLI
size_categories:
- 100K<n<1M
---
# Dataset Card for Bias-amplified Splits for WANLI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Fighting Bias with Bias repo](https://github.com/schwartz-lab-nlp/fight-bias-with-bias)
- **Paper:** [arXiv](https://arxiv.org/abs/2305.18917)
- **Point of Contact:** [Yuval Reif](mailto:yuval.reif@mail.huji.ac.il)
- **Original Dataset's Paper:** [WANLI](https://arxiv.org/abs/2201.05955)
### Dataset Summary
Bias-amplified splits is a novel evaluation framework to assess model robustness, by amplifying dataset biases in the training data and challenging models to generalize beyond them. This framework is defined by a bias-amplified training set and a hard, anti-biased test set, which we automatically extract from existing datasets using model-based methods.
Our experiments show that the identified anti-biased examples are naturally challenging for models, and moreover, models trained on bias-amplified data exhibit dramatic performance drops on anti-biased examples, which are not mitigated by common approaches to improve generalization.
Here we apply our framework to WANLI (**W**orker-**A**I Collaboration for **NLI**), a collection of 108K English sentence pairs for the task of natural language inference (NLI). WANLI was found to be more diverse and challenging for models compared to existing NLI datasets.
Our evaluation framework can be applied to any existing dataset, even those considered obsolete, to test model robustness. We hope our work will guide the development of robust models that do not rely on superficial biases and correlations.
#### Evaluation Results (DeBERTa-large)
##### For splits based on minority examples:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 77.1 | 61.7 |
| Biased training split | 75.5 | 31.8 |
##### For splits based on partial-input model:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 77.1 | 62.6 |
| Biased training split | 76.7 | 49.6 |
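The robustness drop these tables report is simply the difference between accuracies on the two test splits; as a trivial illustration with made-up predictions:

```python
# Illustrative only: compute the biased-vs-anti-biased accuracy gap
# from per-example predictions and gold labels (all data is made up).
def accuracy(preds, golds):
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

biased_gold = ["entailment", "neutral", "contradiction", "entailment"]
biased_pred = ["entailment", "neutral", "contradiction", "neutral"]
anti_gold   = ["neutral", "contradiction", "entailment", "neutral"]
anti_pred   = ["entailment", "contradiction", "neutral", "neutral"]

# 0.75 on the biased split vs. 0.50 on the anti-biased split.
gap = accuracy(biased_pred, biased_gold) - accuracy(anti_pred, anti_gold)
```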
#### Loading the Data
```
from datasets import load_dataset
# choose which bias detection method to use for the bias-amplified splits: either "minority_examples" or "partial_input"
dataset = load_dataset("bias-amplified-splits/wanli", "minority_examples")
# use the biased training split and anti-biased test split
train_dataset = dataset['train.biased']
eval_dataset = dataset['test.anti_biased']
```
## Dataset Structure
### Data Instances
Data instances are taken directly from WANLI, and re-split into biased and anti-biased subsets. Here is an example of an instance from the dataset:
```
{
"id": 225295,
"premise": "It is a tribute to the skill of the coach that the team has been able to compete at the highest level.",
"hypothesis": "The coach is a good coach.",
"gold": "entailment",
"genre": "generated",
"pairID": "171408"
}
```
### Data Fields
- `id`: unique identifier for the example
- `premise`: a piece of text
- `hypothesis`: a piece of text that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- `gold`: one of `entailment`, `neutral`, and `contradiction`
- `genre`: one of `generated` and `generated_revised`, depending on whether the example was revised by annotators
- `pairID`: id of seed MNLI example, corresponding to those in `data/mnli/train.jsonl`
### Data Splits
Bias-amplified splits require a method to detect *biased* and *anti-biased* examples in datasets. We release bias-amplified splits created with each of these two methods:
- **Minority examples**: A novel method we introduce that leverages representation learning and clustering for identifying anti-biased *minority examples* (Tu et al., 2020)—examples that defy common statistical patterns found in the rest of the dataset.
- **Partial-input baselines**: A common method for identifying biased examples containing annotation artifacts in a dataset, which examines the performance of models that are restricted to using only part of the input. Such models, if successful, are bound to rely on unintended or spurious patterns in the dataset.
Using each of the two methods, we split each of the original train and test splits into biased and anti-biased subsets. See the [paper](https://arxiv.org/abs/2305.18917) for more details.
#### Minority Examples
| Dataset Split | Number of Instances in Split |
|---------------------|------------------------------|
| Train - biased | 89402 |
| Train - anti-biased | 13483 |
| Test - biased | 4363 |
| Test - anti-biased | 637 |
#### Partial-input Baselines
| Dataset Split | Number of Instances in Split |
|---------------------|------------------------------|
| Train - biased | 89402 |
| Train - anti-biased | 13483 |
| Test - biased | 4344 |
| Test - anti-biased | 656 |
## Dataset Creation
### Curation Rationale
NLP models often rely on superficial cues known as *dataset biases* to achieve impressive performance, and can fail on examples where these biases do not hold. To develop more robust, unbiased models, recent work aims to filter biased examples from training sets. We argue that in order to encourage the development of robust models, we should in fact **amplify** biases in the training sets, while adopting the challenge set approach and making test sets anti-biased. To implement our approach, we introduce a simple framework that can be applied automatically to any existing dataset to use it for testing model robustness.
### Annotations
#### Annotation process
No new annotations are required to create bias-amplified splits. Existing data instances are split into *biased* and *anti-biased* splits based on automatic model-based methods to detect such examples.
## Considerations for Using the Data
### Social Impact of Dataset
Bias-amplified splits were created to promote the development of robust NLP models that do not rely on superficial biases and correlations, and provide more challenging evaluation of existing systems.
### Discussion of Biases
We propose to use bias-amplified splits to complement benchmarks with challenging evaluation settings that test model robustness, in addition to the dataset’s main training and test sets. As such, while existing dataset biases are *amplified* during training with bias-amplified splits, these splits are intended primarily for model evaluation, to expose the bias-exploiting behaviors of models and to identify more robust models and effective robustness interventions.
## Additional Information
### Dataset Curators
Bias-amplified splits were introduced by Yuval Reif and Roy Schwartz from the [Hebrew University of Jerusalem](https://schwartz-lab-huji.github.io).
WANLI was developed by Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi from the [University of Washington](https://www.cs.washington.edu/) and [AI2](https://allenai.org/).
### Citation Information
```
@misc{reif2023fighting,
title = "Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases",
author = "Yuval Reif and Roy Schwartz",
month = may,
year = "2023",
url = "https://arxiv.org/pdf/2305.18917",
}
```
Source dataset:
```
@misc{liu-etal-2022-wanli,
title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
author = "Liu, Alisa and
Swayamdipta, Swabha and
Smith, Noah A. and
Choi, Yejin",
month = jan,
year = "2022",
url = "https://arxiv.org/pdf/2201.05955",
}
``` |
euclaise/thevault-filtered | 2023-07-04T17:24:01.000Z | [
"task_categories:text-generation",
"license:mit",
"region:us"
] | euclaise | null | null | null | 2 | 4 | ---
dataset_info:
features:
- name: hexsha
dtype: string
- name: repo
dtype: string
- name: path
dtype: string
- name: license
sequence: string
- name: language
dtype: string
- name: identifier
dtype: string
- name: return_type
dtype: string
- name: original_string
dtype: string
- name: original_docstring
dtype: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: short_docstring
dtype: string
- name: short_docstring_tokens
sequence: string
- name: comment
sequence: string
- name: parameters
list:
- name: param
dtype: string
- name: type
dtype: string
- name: docstring_params
struct:
- name: returns
list:
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: type
dtype: string
- name: raises
list:
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: type
dtype: string
- name: params
list:
- name: identifier
dtype: string
- name: type
dtype: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: default
dtype: string
- name: is_optional
dtype: bool
- name: outlier_params
list:
- name: identifier
dtype: string
- name: type
dtype: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: default
dtype: string
- name: is_optional
dtype: bool
- name: others
list:
- name: identifier
dtype: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: code_with_imports
dtype: string
- name: idxs
dtype: int64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 1555988881.6663418
num_examples: 544627
download_size: 773215769
dataset_size: 1555988881.6663418
license: mit
task_categories:
- text-generation
---
# Dataset Card for "thevault-filtered"
Filtered version of [The Vault (function)](https://huggingface.co/datasets/Fsoft-AIC/the-vault-function). Restricted only to Python, then:
- Light AST filtering for self-contained functions
- Embedded with CodeBERT, clustered with k-means into 1024 clusters; the clusters were then manually skimmed for seemingly uninformative functions.
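The AST criteria are only loosely described above; the following is a hypothetical sketch of what a "light self-containedness" check could look like (the heuristic and names are my illustration, not this dataset's actual filter): keep a function only if every name it reads is a parameter, a name assigned inside the function, or a builtin.

```python
# Hypothetical self-containedness check -- an illustration, not the
# dataset's actual filter. A function passes if every name it loads is
# a parameter, a name assigned inside the function, or a builtin.
import ast
import builtins

def is_self_contained(src: str) -> bool:
    try:
        tree = ast.parse(src)
    except SyntaxError:
        return False
    if not (tree.body and isinstance(tree.body[0], ast.FunctionDef)):
        return False
    fn = tree.body[0]
    known = {a.arg for a in fn.args.args + fn.args.kwonlyargs}
    known |= set(dir(builtins)) | {fn.name}
    loads = set()
    for node in ast.walk(fn):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                known.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                loads.add(node.id)
    return loads <= known

ok = is_self_contained("def add(a, b):\n    return a + b")   # uses only its params
bad = is_self_contained("def f(x):\n    return np.mean(x)")  # free name `np`
```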
The clusters excluded and their reasons are as follows:
```
excluded = [
    4, # biochem stuff? Decompiled code
9, # Empty functions
33, # Empty functions
34, # UI stuff, just returns arguments
37, # Empty functions
40, # Empty functions
42, # Empty functions
44, # _namespace_SIO stuff
55, # Trivial, e.g. add(a, b) = a + b
66, # find_by class methods
67, # Mostly methods, seems not very informative
77, # openapi_types, returns a fixed dictionary
78, # Minimal, method stuff
83, # Locale configuration
87, # Just returns argument
101, # Incomplete
102, # Class methods
108, # openapi_types
156, # Empty functions
164, # Trivial, function aliases
168, # Class methods
172, # Empty functions
173, # Class methods
175, # Class methods
181, # Empty functions
182, # Fixed API stuff
190, # Fixed specific stuff
197, # from_dictionary class methods
198, # Empty functions
234, # Unimplemented
246, # Fixed specific stuff
277, # Empty functions
280, # Empty functions
282, # Empty functions
287, # Trivial, e.g. helloWorld()
299, # Mostly unfinished
304, # Empty functions
310, # Fixed API stuff
313, # Just modifies globals
320, # Empty functions
329, # Takes a credentials object, and runs methods on it
332, # MangoPi bot
334, # Empty
338, # namespace_SIO nonsense
339, # fn(x) = x
363, # Empty functions
370, # Empty
379, # Empty
388, # Empty
392, # Empty functions
393, # Fixed lists
409, # Fixed dictionaries
416, # Aliases to print
428, # Empty functions
437, # Empty functions
444, # Empty
454, # Mostly just calls methods on arguments
463, # Mostly just calls methods on arguments
470, # Fixed dictionaries
474, # Mostly fixed printing
465, # OpenAPI fixed dictionaries
476, # Empty
477, # Fixed dictionaries
491, # Trivial
494, # Lots of fixed string stuff
496, # Empty
511, # Empty
518, # OpenAPI
521, # Fixed API stuff
536, # Empty
540, # Fixed API stuff
553, # Empty
555, # Empty
564, # Empty
566, # Empty
568, # cls methods
573, # Mostly fixed dict stuff
574, # namespace_SO stuff, more biochem?
582, # namespace_SO stuff, more biochem?
602, # Fixed lists
608, # Mostly cls methods
617, # Mostly cls methods
629, # cls methods, fixed lists
641, # Fixed API stuff
642, # Empty
647, # Windows API stuff
648, # jupyter stuff
649, # mostly fixed dicts
652, # Empty
660, # Empty
665, # cls methods
666, # Empty
672, # Empty
680, # fixed dicts
682, # Empty
686, # Empty
687, # Fixed lists elements_sequence
692, # cls methods
693, # ASCII art
704, # Empty
709, # mqtt send message
712, # Empty
715, # Fixed data recoding
717, # Empty
722, # cls methods
725, # cls methods
734, # cls methods
737, # Empty
741, # Trivial cls methods
742, # Empty
745, # Fixed strings
752, # Empty
758, # Mostly fixed printing
768, # Empty
783, # Empty
784, # Mostly fixed dicts
802, # Fixed printing
806, # Empty
821, # Empty
824, # stuff like load_performance_win_x64_win_x64_vs2017_settings
825, # Trivial
835, # Empty
851, # Empty
862, # Empty
876, # Trivial
878, # Empty
887, # Empty
888, # Mostly fixed dicts
890, # Mostly fixed dicts
893, # Empty
898, # cls methods
899, # Fixed ['str'] stuff
906, # Auto-generated or something
912, # Empty
924, # Empty
933, # namespace_SO biochem stuff
938, # Trivial
959, # Mostly fixed printing
963, # API-specific
965, # cls methods
967, # cls methods
970, # Mostly fixed printing
971, # cls methods
972, # cls methods
973, # Empty
979, # cls methods
982, # Empty
983, # Empty
989, # cls methods
990, # API specific
1007, # API specific
1014, # Empty
]
```
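The exclusion step itself is mechanical once cluster assignments exist. A minimal sketch of the cluster-and-filter stage with placeholder data — the real pipeline used CodeBERT embeddings and k = 1024, and a nearest-centroid assignment stands in here for a full k-means fit:

```python
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 16))  # placeholder for CodeBERT vectors
excluded = {4, 9, 33, 34, 37}            # subset of the curated list above

# Stand-in for k-means: assign each function to its nearest of k centroids.
k = 64
centroids = embeddings[rng.choice(len(embeddings), size=k, replace=False)]
dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=-1)
labels = dists.argmin(axis=1)

# Keep only functions whose cluster survived the manual skim.
kept = [i for i in range(len(embeddings)) if labels[i] not in excluded]
```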
MIT licensed, like the original dataset |
izumi-lab/mc4-ja | 2023-07-29T03:11:03.000Z | [
"language:ja",
"license:odc-by",
"region:us"
] | izumi-lab | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 830150253418
num_examples: 87337884
- name: validation
num_bytes: 832560244
num_examples: 87420
download_size: 298921056154
dataset_size: 830982813662
license: odc-by
language:
- ja
---
# Dataset Card for "mc4-ja"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sled-umich/SDN | 2023-08-01T01:47:31.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-nd-4.0",
"arxiv:2210.12511",
"region:us"
] | sled-umich | null | null | null | 0 | 4 | ---
license: cc-by-nc-nd-4.0
task_categories:
- text-classification
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---
# DOROTHIE
## Spoken Dialogue for Handling Unexpected Situations in Interactive Autonomous Driving Agents
**[Research Paper](https://arxiv.org/abs/2210.12511) | [Github](https://github.com/sled-group/DOROTHIE) | [Huggingface](https://huggingface.co/datasets/sled-umich/DOROTHIE)**
Authored by [Ziqiao Ma](https://mars-tin.github.io/), Ben VanDerPloeg, Cristian-Paul Bara, [Yidong Huang](https://sled.eecs.umich.edu/author/yidong-huang/), Eui-In Kim, Felix Gervits, Matthew Marge, [Joyce Chai](https://web.eecs.umich.edu/~chaijy/)
DOROTHIE (Dialogue On the ROad To Handle Irregular Events) is an innovative interactive simulation platform designed to create unexpected scenarios on the fly. This tool facilitates empirical studies on situated communication with autonomous driving agents.

This dataset contains only the dialogue data. To see the whole simulation process and download the full dataset, please visit our [Github homepage](https://github.com/sled-group/DOROTHIE) |
ssbuild/alaca_chain-of-thought | 2023-07-09T06:08:39.000Z | [
"license:apache-2.0",
"region:us"
] | ssbuild | null | null | null | 3 | 4 | ---
license: apache-2.0
---
|
GenP/Synthetic_Face_Images_Academic_Dataset | 2023-07-09T08:49:50.000Z | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"size_categories:1K<n<10K",
"license:afl-3.0",
"region:us"
] | GenP | null | null | null | 0 | 4 | ---
license: afl-3.0
task_categories:
- image-classification
- image-segmentation
size_categories:
- 1K<n<10K
---
Academic Dataset by Generated Photos
See at https://generated.photos/datasets#research-dataset
This free dataset is intended to help students and teachers with research. It contains 10,000 photos with an equal distribution of race and gender.
If you need a dataset with different parameters or quantity, contact us at work.with@generated.photos.
We would appreciate it if you let us know about the research outcome!
----------------------------------------------------------
Terms of use
----------------------------------------------------------
You can use and adapt it for any research purposes, as long as you:
(a) give appropriate credit by citing it in your paper,
(b) put a link to the Generated Photos website if you publish your paper, the results of your research, or a related article. Example of an attribution line: Academic Dataset by Generated Photos https://generated.photos/datasets
You can redistribute it within your university, but please follow these rules:
(a) indicate any changes that you've made,
(b) make sure that any fellow student or teacher you pass this dataset to is aware of the terms of use described in this file.
For more information about datasets and license, please visit Generated Photos website:
https://generated.photos/datasets
https://generated.photos/faq
https://generated.photos/terms-and-conditions
----------------------------------------------------------
Photos
----------------------------------------------------------
All the photos are 100% synthetic, based on model-released photos. Royalty-free. Can be used for any research purpose except those violating the law. Worldwide. No time limitations.
Quantity 10,000
Quality 256x256px
Diversity Ethnicity, gender
----------------------------------------------------------
Metadata
----------------------------------------------------------
The JSON files contain the metadata for each image in a machine-readable format, including:
(1) FaceLandmarks: mouth, right_eyebrow, left_eyebrow, right_eye, left_eye, nose, jaw.
(2) FaceAttributes: headPose, gender, makeup, emotion, facialHair, hair (hairColor, hairLength, bald), occlusion, ethnicity, eye_color, smile, age
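Reading one of these metadata files is a plain JSON parse. An illustrative record follows the field list above, but the key casing, nesting, and value types in the actual files may differ:

```python
import json

# Hypothetical metadata record mirroring the documented fields.
raw = """
{
  "faceAttributes": {
    "gender": "female",
    "age": 31.0,
    "headPose": {"pitch": 0.0, "roll": -1.2, "yaw": 4.5},
    "hair": {"hairColor": "brown", "hairLength": "long", "bald": false}
  },
  "faceLandmarks": {
    "nose": [[128, 140]],
    "jaw": [[60, 200], [128, 230], [196, 200]]
  }
}
"""
record = json.loads(raw)

attrs = record["faceAttributes"]
print(attrs["gender"], attrs["hair"]["hairColor"])  # female brown
print(len(record["faceLandmarks"]["jaw"]))          # 3
```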
----------------------------------------------------------
Please contact work.with@generated.photos for business and press inquiries and other questions. |
BAAI/SVIT | 2023-08-24T09:19:03.000Z | [
"task_categories:visual-question-answering",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-4.0",
"arxiv:2307.04087",
"region:us"
] | BAAI | Scale up visual instruction tuning to millions by GPT-4. | @article{zhao2023svit,
title={SVIT: Scaling up Visual Instruction Tuning},
author={Zhao, Bo and Wu, Boya and Huang, Tiejun},
journal={arXiv preprint arXiv:2307.04087},
year={2023}
} | null | 6 | 4 | ---
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_prompt: >
The Beijing Academy of Artificial Intelligence (hereinafter referred to as "we" or "BAAI") provides you with an open-source dataset (hereinafter referred to as "dataset") through the SVIT HuggingFace repository (https://huggingface.co/datasets/BAAI/SVIT). You can download the dataset you need and use it for purposes such as learning, research, and business, while abiding by the usage rules of each original dataset.
Before you acquire the open-source dataset (including but not limited to accessing, downloading, copying, distributing, using, or any other handling of the dataset), you should read and understand this "SVIT Open-Source Dataset Usage Notice and Disclaimer" (hereinafter referred to as "this statement"). Once you acquire the open-source dataset, regardless of your method of acquisition, your actions will be regarded as acknowledgment of the full content of this statement.
1. Ownership and Operation Rights
You should fully understand that the ownership and operation rights of the SVIT HuggingFace repository (including the current and all previous versions) belong to BAAI. BAAI has the final interpretation and decision rights over this platform/tool and the open-source dataset plan.
You acknowledge and understand that due to updates and improvements in relevant laws and regulations and the need to fulfill our legal compliance obligations, we reserve the right to update, maintain, or even suspend or permanently terminate the services of this platform/tool from time to time. We will notify you of possible situations mentioned above in a reasonable manner such as through an announcement or email within a reasonable time. You should make corresponding adjustments and arrangements in a timely manner. However, we do not bear any responsibility for any losses caused to you by any of the aforementioned situations.
2. Claim of Rights to Open-Source Datasets
For the purpose of facilitating your dataset acquisition and use for learning, research, and business, we have performed necessary steps such as format integration, data cleaning, labeling, categorizing, annotating, and other related processing on the third-party original datasets to form the open-source datasets for this platform/tool's users.
You understand and acknowledge that we do not claim the proprietary rights of intellectual property to the open-source datasets. Therefore, we have no obligation to actively recognize and protect the potential intellectual property of the open-source datasets. However, this does not mean that we renounce the personal rights to claim credit, publication, modification, and protection of the integrity of the work (if any) of the open-source datasets. The potential intellectual property and corresponding legal rights of the original datasets belong to the original rights holders.
In addition, providing you with open-source datasets that have been reasonably arranged, processed, and handled does not mean that we acknowledge the authenticity, accuracy, or indisputability of the intellectual property and information content of the original datasets. You should filter and carefully discern the open-source datasets you choose to use. You understand and agree that BAAI does not undertake any obligation or warranty responsibility for any defects or flaws in the original datasets you choose to use.
3. Usage Restrictions for Open-Source Datasets
Your use of the dataset must not infringe on our or any third party's legal rights and interests (including but not limited to copyrights, patent rights, trademark rights, and other intellectual property and other rights).
After obtaining the open-source dataset, you should ensure that your use of the open-source dataset does not exceed the usage rules explicitly stipulated by the rights holders of the original dataset in the form of a public notice or agreement, including the range, purpose, and lawful purposes of the use of the original data. We kindly remind you here that if your use of the open-source dataset exceeds the predetermined range and purpose of the original dataset, you may face the risk of infringing on the legal rights and interests of the rights holders of the original dataset, such as intellectual property, and may bear corresponding legal responsibilities.
4. Personal Information Protection
Due to technical limitations and the public welfare nature of the open-source datasets, we cannot guarantee that the open-source datasets do not contain any personal information, and we do not bear any legal responsibility for any personal information that may be involved in the open-source datasets.
If the open-source dataset involves personal information, we do not bear any legal responsibility for any personal information processing activities you may involve when using the open-source dataset. We kindly remind you here that you should handle personal information in accordance with the provisions of the "Personal Information Protection Law" and other relevant laws and regulations.
To protect the legal rights and interests of the information subject and to fulfill possible applicable laws and administrative regulations, if you find content that involves or may involve personal information during the use of the open-source dataset, you should immediately stop using the part of the dataset that involves personal information and contact us as indicated in "6. Complaints and Notices."
5. Information Content Management
We do not bear any legal responsibility for any illegal and bad information that may be involved in the open-source dataset.
If you find that the open-source dataset involves or may involve any illegal and bad information during your use, you should immediately stop using the part of the dataset that involves illegal and bad information and contact us in a timely manner as indicated in "6. Complaints and Notices."
6. Complaints and Notices
If you believe that the open-source dataset has infringed on your legal rights and interests, you can contact us at 010-50955974, and we will handle your claims and complaints in accordance with the law in a timely manner.
To handle your claims and complaints, we may need you to provide contact information, infringement proof materials, and identity proof materials. Please note that if you maliciously complain or make false statements, you will bear all legal responsibilities caused thereby (including but not limited to reasonable compensation costs).
7. Disclaimer
You understand and agree that due to the nature of the open-source dataset, the dataset may contain data from different sources and contributors, and the authenticity, accuracy, and objectivity of the data may vary, and we cannot make any promises about the availability and reliability of any dataset.
In any case, we do not bear any legal responsibility for any risks such as personal information infringement, illegal and bad information dissemination, and intellectual property infringement that may exist in the open-source dataset.
In any case, we do not bear any legal responsibility for any loss (including but not limited to direct loss, indirect loss, and loss of potential benefits) you suffer or is related to the open-source dataset.
8. Others
The open-source dataset is in a constant state of development and change. We may update, adjust the range of the open-source dataset we provide, or suspend, pause, or terminate the open-source dataset service due to business development, third-party cooperation, changes in laws and regulations, and other reasons.
extra_gated_fields:
Name: text
Affiliation: text
Country: text
I agree to accept the license: checkbox
extra_gated_button_content: Acknowledge license
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- en
pretty_name: SVIT
size_categories:
- 1M<n<10M
---
# Dataset Card for SVIT
Scale up visual instruction tuning to millions by GPT-4.
## Dataset Description
- **Repository:** https://github.com/BAAI-DCAI/Visual-Instruction-Tuning
- **Paper:** https://arxiv.org/pdf/2307.04087.pdf
## Introduction
We Scale up Visual Instruction Tuning (SVIT) and propose a large-scale dataset with 4.2 million instruction tuning examples, including 1.6M conversation QA pairs, 1.6M complex reasoning QA pairs, 106K detailed descriptions and 1.0M referring QA pairs, generated by prompting GPT-4 with the abundant manual annotations of images.
The dataset is built on Visual Genome and MS-COCO. The original images and annotations from Visual Genome and MS-COCO are in the "raw" folder; the instructions and responses generated by GPT-4 are in the "data" folder. Details about the dataset can be found in the GitHub repository or the paper linked above.
## License
The dataset is licensed under a Creative Commons Attribution 4.0 License.
It should abide by the policy of OpenAI: https://openai.com/policies/terms-of-use.
The use of original images and annotations from Visual Genome and MS-COCO should comply with the original licenses.
## Contact us
If you have any comments or questions about the dataset, feel free to create an issue in GitHub: https://github.com/BAAI-DCAI/Visual-Instruction-Tuning/issues. |
sl-alex/openai-prm800k-stepwise-critic | 2023-07-12T16:00:16.000Z | [
"license:mit",
"region:us"
] | sl-alex | null | null | null | 0 | 4 | ---
license: mit
---
Denormalized dataset created by processing OpenAI's [PRM800K](https://github.com/openai/prm800k/tree/main) process supervision dataset via [prm800k-denorm](https://github.com/scottlogic-alex/prm800k-denorm).
We include every conversation turn (i.e. "what's been said so far" + "the next step in the conversation"), good and bad, along with the human evaluator's rating of whether the step was a good or bad response.
You could use this for training a classifier.
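For example, each denormalized row can be flattened into a (text, label) pair for a binary good/bad-step classifier. A sketch with made-up rows — the field names here are illustrative, not the dataset's actual column names:

```python
# Each hypothetical row: the dialogue so far, the candidate next step,
# and the human evaluator's rating of that step.
rows = [
    {"context": "Q: What is 2+2? A: Let's add the numbers.",
     "step": "2 + 2 = 4.", "rating": 1},
    {"context": "Q: What is 2+2? A: Let's add the numbers.",
     "step": "2 + 2 = 5.", "rating": -1},
]

# Concatenate context and step; map positive ratings to label 1, else 0.
pairs = [
    (row["context"] + "\n" + row["step"], 1 if row["rating"] > 0 else 0)
    for row in rows
]
```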
Dataset description and usage instructions in [prm800k-denorm README](https://github.com/scottlogic-alex/prm800k-denorm/blob/main/README.md). |
VishaalY/solutions-architect-hf-dataset | 2023-07-19T15:07:34.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | VishaalY | null | null | null | 0 | 4 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
pretty_name: sol-set
size_categories:
- 1K<n<10K
--- |
NightMachinery/ImageNet1K-val-indexed | 2023-07-13T22:54:49.000Z | [
"region:us"
] | NightMachinery | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': n01440764
'1': n01443537
'2': n01484850
'3': n01491361
'4': n01494475
'5': n01496331
'6': n01498041
'7': n01514668
'8': n01514859
'9': n01518878
'10': n01530575
'11': n01531178
'12': n01532829
'13': n01534433
'14': n01537544
'15': n01558993
'16': n01560419
'17': n01580077
'18': n01582220
'19': n01592084
'20': n01601694
'21': n01608432
'22': n01614925
'23': n01616318
'24': n01622779
'25': n01629819
'26': n01630670
'27': n01631663
'28': n01632458
'29': n01632777
'30': n01641577
'31': n01644373
'32': n01644900
'33': n01664065
'34': n01665541
'35': n01667114
'36': n01667778
'37': n01669191
'38': n01675722
'39': n01677366
'40': n01682714
'41': n01685808
'42': n01687978
'43': n01688243
'44': n01689811
'45': n01692333
'46': n01693334
'47': n01694178
'48': n01695060
'49': n01697457
'50': n01698640
'51': n01704323
'52': n01728572
'53': n01728920
'54': n01729322
'55': n01729977
'56': n01734418
'57': n01735189
'58': n01737021
'59': n01739381
'60': n01740131
'61': n01742172
'62': n01744401
'63': n01748264
'64': n01749939
'65': n01751748
'66': n01753488
'67': n01755581
'68': n01756291
'69': n01768244
'70': n01770081
'71': n01770393
'72': n01773157
'73': n01773549
'74': n01773797
'75': n01774384
'76': n01774750
'77': n01775062
'78': n01776313
'79': n01784675
'80': n01795545
'81': n01796340
'82': n01797886
'83': n01798484
'84': n01806143
'85': n01806567
'86': n01807496
'87': n01817953
'88': n01818515
'89': n01819313
'90': n01820546
'91': n01824575
'92': n01828970
'93': n01829413
'94': n01833805
'95': n01843065
'96': n01843383
'97': n01847000
'98': n01855032
'99': n01855672
'100': n01860187
'101': n01871265
'102': n01872401
'103': n01873310
'104': n01877812
'105': n01882714
'106': n01883070
'107': n01910747
'108': n01914609
'109': n01917289
'110': n01924916
'111': n01930112
'112': n01943899
'113': n01944390
'114': n01945685
'115': n01950731
'116': n01955084
'117': n01968897
'118': n01978287
'119': n01978455
'120': n01980166
'121': n01981276
'122': n01983481
'123': n01984695
'124': n01985128
'125': n01986214
'126': n01990800
'127': n02002556
'128': n02002724
'129': n02006656
'130': n02007558
'131': n02009229
'132': n02009912
'133': n02011460
'134': n02012849
'135': n02013706
'136': n02017213
'137': n02018207
'138': n02018795
'139': n02025239
'140': n02027492
'141': n02028035
'142': n02033041
'143': n02037110
'144': n02051845
'145': n02056570
'146': n02058221
'147': n02066245
'148': n02071294
'149': n02074367
'150': n02077923
'151': n02085620
'152': n02085782
'153': n02085936
'154': n02086079
'155': n02086240
'156': n02086646
'157': n02086910
'158': n02087046
'159': n02087394
'160': n02088094
'161': n02088238
'162': n02088364
'163': n02088466
'164': n02088632
'165': n02089078
'166': n02089867
'167': n02089973
'168': n02090379
'169': n02090622
'170': n02090721
'171': n02091032
'172': n02091134
'173': n02091244
'174': n02091467
'175': n02091635
'176': n02091831
'177': n02092002
'178': n02092339
'179': n02093256
'180': n02093428
'181': n02093647
'182': n02093754
'183': n02093859
'184': n02093991
'185': n02094114
'186': n02094258
'187': n02094433
'188': n02095314
'189': n02095570
'190': n02095889
'191': n02096051
'192': n02096177
'193': n02096294
'194': n02096437
'195': n02096585
'196': n02097047
'197': n02097130
'198': n02097209
'199': n02097298
'200': n02097474
'201': n02097658
'202': n02098105
'203': n02098286
'204': n02098413
'205': n02099267
'206': n02099429
'207': n02099601
'208': n02099712
'209': n02099849
'210': n02100236
'211': n02100583
'212': n02100735
'213': n02100877
'214': n02101006
'215': n02101388
'216': n02101556
'217': n02102040
'218': n02102177
'219': n02102318
'220': n02102480
'221': n02102973
'222': n02104029
'223': n02104365
'224': n02105056
'225': n02105162
'226': n02105251
'227': n02105412
'228': n02105505
'229': n02105641
'230': n02105855
'231': n02106030
'232': n02106166
'233': n02106382
'234': n02106550
'235': n02106662
'236': n02107142
'237': n02107312
'238': n02107574
'239': n02107683
'240': n02107908
'241': n02108000
'242': n02108089
'243': n02108422
'244': n02108551
'245': n02108915
'246': n02109047
'247': n02109525
'248': n02109961
'249': n02110063
'250': n02110185
'251': n02110341
'252': n02110627
'253': n02110806
'254': n02110958
'255': n02111129
'256': n02111277
'257': n02111500
'258': n02111889
'259': n02112018
'260': n02112137
'261': n02112350
'262': n02112706
'263': n02113023
'264': n02113186
'265': n02113624
'266': n02113712
'267': n02113799
'268': n02113978
'269': n02114367
'270': n02114548
'271': n02114712
'272': n02114855
'273': n02115641
'274': n02115913
'275': n02116738
'276': n02117135
'277': n02119022
'278': n02119789
'279': n02120079
'280': n02120505
'281': n02123045
'282': n02123159
'283': n02123394
'284': n02123597
'285': n02124075
'286': n02125311
'287': n02127052
'288': n02128385
'289': n02128757
'290': n02128925
'291': n02129165
'292': n02129604
'293': n02130308
'294': n02132136
'295': n02133161
'296': n02134084
'297': n02134418
'298': n02137549
'299': n02138441
'300': n02165105
'301': n02165456
'302': n02167151
'303': n02168699
'304': n02169497
'305': n02172182
'306': n02174001
'307': n02177972
'308': n02190166
'309': n02206856
'310': n02219486
'311': n02226429
'312': n02229544
'313': n02231487
'314': n02233338
'315': n02236044
'316': n02256656
'317': n02259212
'318': n02264363
'319': n02268443
'320': n02268853
'321': n02276258
'322': n02277742
'323': n02279972
'324': n02280649
'325': n02281406
'326': n02281787
'327': n02317335
'328': n02319095
'329': n02321529
'330': n02325366
'331': n02326432
'332': n02328150
'333': n02342885
'334': n02346627
'335': n02356798
'336': n02361337
'337': n02363005
'338': n02364673
'339': n02389026
'340': n02391049
'341': n02395406
'342': n02396427
'343': n02397096
'344': n02398521
'345': n02403003
'346': n02408429
'347': n02410509
'348': n02412080
'349': n02415577
'350': n02417914
'351': n02422106
'352': n02422699
'353': n02423022
'354': n02437312
'355': n02437616
'356': n02441942
'357': n02442845
'358': n02443114
'359': n02443484
'360': n02444819
'361': n02445715
'362': n02447366
'363': n02454379
'364': n02457408
'365': n02480495
'366': n02480855
'367': n02481823
'368': n02483362
'369': n02483708
'370': n02484975
'371': n02486261
'372': n02486410
'373': n02487347
'374': n02488291
'375': n02488702
'376': n02489166
'377': n02490219
'378': n02492035
'379': n02492660
'380': n02493509
'381': n02493793
'382': n02494079
'383': n02497673
'384': n02500267
'385': n02504013
'386': n02504458
'387': n02509815
'388': n02510455
'389': n02514041
'390': n02526121
'391': n02536864
'392': n02606052
'393': n02607072
'394': n02640242
'395': n02641379
'396': n02643566
'397': n02655020
'398': n02666196
'399': n02667093
'400': n02669723
'401': n02672831
'402': n02676566
'403': n02687172
'404': n02690373
'405': n02692877
'406': n02699494
'407': n02701002
'408': n02704792
'409': n02708093
'410': n02727426
'411': n02730930
'412': n02747177
'413': n02749479
'414': n02769748
'415': n02776631
'416': n02777292
'417': n02782093
'418': n02783161
'419': n02786058
'420': n02787622
'421': n02788148
'422': n02790996
'423': n02791124
'424': n02791270
'425': n02793495
'426': n02794156
'427': n02795169
'428': n02797295
'429': n02799071
'430': n02802426
'431': n02804414
'432': n02804610
'433': n02807133
'434': n02808304
'435': n02808440
'436': n02814533
'437': n02814860
'438': n02815834
'439': n02817516
'440': n02823428
'441': n02823750
'442': n02825657
'443': n02834397
'444': n02835271
'445': n02837789
'446': n02840245
'447': n02841315
'448': n02843684
'449': n02859443
'450': n02860847
'451': n02865351
'452': n02869837
'453': n02870880
'454': n02871525
'455': n02877765
'456': n02879718
'457': n02883205
'458': n02892201
'459': n02892767
'460': n02894605
'461': n02895154
'462': n02906734
'463': n02909870
'464': n02910353
'465': n02916936
'466': n02917067
'467': n02927161
'468': n02930766
'469': n02939185
'470': n02948072
'471': n02950826
'472': n02951358
'473': n02951585
'474': n02963159
'475': n02965783
'476': n02966193
'477': n02966687
'478': n02971356
'479': n02974003
'480': n02977058
'481': n02978881
'482': n02979186
'483': n02980441
'484': n02981792
'485': n02988304
'486': n02992211
'487': n02992529
'488': n02999410
'489': n03000134
'490': n03000247
'491': n03000684
'492': n03014705
'493': n03016953
'494': n03017168
'495': n03018349
'496': n03026506
'497': n03028079
'498': n03032252
'499': n03041632
'500': n03042490
'501': n03045698
'502': n03047690
'503': n03062245
'504': n03063599
'505': n03063689
'506': n03065424
'507': n03075370
'508': n03085013
'509': n03089624
'510': n03095699
'511': n03100240
'512': n03109150
'513': n03110669
'514': n03124043
'515': n03124170
'516': n03125729
'517': n03126707
'518': n03127747
'519': n03127925
'520': n03131574
'521': n03133878
'522': n03134739
'523': n03141823
'524': n03146219
'525': n03160309
'526': n03179701
'527': n03180011
'528': n03187595
'529': n03188531
'530': n03196217
'531': n03197337
'532': n03201208
'533': n03207743
'534': n03207941
'535': n03208938
'536': n03216828
'537': n03218198
'538': n03220513
'539': n03223299
'540': n03240683
'541': n03249569
'542': n03250847
'543': n03255030
'544': n03259280
'545': n03271574
'546': n03272010
'547': n03272562
'548': n03290653
'549': n03291819
'550': n03297495
'551': n03314780
'552': n03325584
'553': n03337140
'554': n03344393
'555': n03345487
'556': n03347037
'557': n03355925
'558': n03372029
'559': n03376595
'560': n03379051
'561': n03384352
'562': n03388043
'563': n03388183
'564': n03388549
'565': n03393912
'566': n03394916
'567': n03400231
'568': n03404251
'569': n03417042
'570': n03424325
'571': n03425413
'572': n03443371
'573': n03444034
'574': n03445777
'575': n03445924
'576': n03447447
'577': n03447721
'578': n03450230
'579': n03452741
'580': n03457902
'581': n03459775
'582': n03461385
'583': n03467068
'584': n03476684
'585': n03476991
'586': n03478589
'587': n03481172
'588': n03482405
'589': n03483316
'590': n03485407
'591': n03485794
'592': n03492542
'593': n03494278
'594': n03495258
'595': n03496892
'596': n03498962
'597': n03527444
'598': n03529860
'599': n03530642
'600': n03532672
'601': n03534580
'602': n03535780
'603': n03538406
'604': n03544143
'605': n03584254
'606': n03584829
'607': n03590841
'608': n03594734
'609': n03594945
'610': n03595614
'611': n03598930
'612': n03599486
'613': n03602883
'614': n03617480
'615': n03623198
'616': n03627232
'617': n03630383
'618': n03633091
'619': n03637318
'620': n03642806
'621': n03649909
'622': n03657121
'623': n03658185
'624': n03661043
'625': n03662601
'626': n03666591
'627': n03670208
'628': n03673027
'629': n03676483
'630': n03680355
'631': n03690938
'632': n03691459
'633': n03692522
'634': n03697007
'635': n03706229
'636': n03709823
'637': n03710193
'638': n03710637
'639': n03710721
'640': n03717622
'641': n03720891
'642': n03721384
'643': n03724870
'644': n03729826
'645': n03733131
'646': n03733281
'647': n03733805
'648': n03742115
'649': n03743016
'650': n03759954
'651': n03761084
'652': n03763968
'653': n03764736
'654': n03769881
'655': n03770439
'656': n03770679
'657': n03773504
'658': n03775071
'659': n03775546
'660': n03776460
'661': n03777568
'662': n03777754
'663': n03781244
'664': n03782006
'665': n03785016
'666': n03786901
'667': n03787032
'668': n03788195
'669': n03788365
'670': n03791053
'671': n03792782
'672': n03792972
'673': n03793489
'674': n03794056
'675': n03796401
'676': n03803284
'677': n03804744
'678': n03814639
'679': n03814906
'680': n03825788
'681': n03832673
'682': n03837869
'683': n03838899
'684': n03840681
'685': n03841143
'686': n03843555
'687': n03854065
'688': n03857828
'689': n03866082
'690': n03868242
'691': n03868863
'692': n03871628
'693': n03873416
'694': n03874293
'695': n03874599
'696': n03876231
'697': n03877472
'698': n03877845
'699': n03884397
'700': n03887697
'701': n03888257
'702': n03888605
'703': n03891251
'704': n03891332
'705': n03895866
'706': n03899768
'707': n03902125
'708': n03903868
'709': n03908618
'710': n03908714
'711': n03916031
'712': n03920288
'713': n03924679
'714': n03929660
'715': n03929855
'716': n03930313
'717': n03930630
'718': n03933933
'719': n03935335
'720': n03937543
'721': n03938244
'722': n03942813
'723': n03944341
'724': n03947888
'725': n03950228
'726': n03954731
'727': n03956157
'728': n03958227
'729': n03961711
'730': n03967562
'731': n03970156
'732': n03976467
'733': n03976657
'734': n03977966
'735': n03980874
'736': n03982430
'737': n03983396
'738': n03991062
'739': n03992509
'740': n03995372
'741': n03998194
'742': n04004767
'743': n04005630
'744': n04008634
'745': n04009552
'746': n04019541
'747': n04023962
'748': n04026417
'749': n04033901
'750': n04033995
'751': n04037443
'752': n04039381
'753': n04040759
'754': n04041544
'755': n04044716
'756': n04049303
'757': n04065272
'758': n04067472
'759': n04069434
'760': n04070727
'761': n04074963
'762': n04081281
'763': n04086273
'764': n04090263
'765': n04099969
'766': n04111531
'767': n04116512
'768': n04118538
'769': n04118776
'770': n04120489
'771': n04125021
'772': n04127249
'773': n04131690
'774': n04133789
'775': n04136333
'776': n04141076
'777': n04141327
'778': n04141975
'779': n04146614
'780': n04147183
'781': n04149813
'782': n04152593
'783': n04153751
'784': n04154565
'785': n04162706
'786': n04179913
'787': n04192698
'788': n04200800
'789': n04201297
'790': n04204238
'791': n04204347
'792': n04208210
'793': n04209133
'794': n04209239
'795': n04228054
'796': n04229816
'797': n04235860
'798': n04238763
'799': n04239074
'800': n04243546
'801': n04251144
'802': n04252077
'803': n04252225
'804': n04254120
'805': n04254680
'806': n04254777
'807': n04258138
'808': n04259630
'809': n04263257
'810': n04264628
'811': n04265275
'812': n04266014
'813': n04270147
'814': n04273569
'815': n04275548
'816': n04277352
'817': n04285008
'818': n04286575
'819': n04296562
'820': n04310018
'821': n04311004
'822': n04311174
'823': n04317175
'824': n04325704
'825': n04326547
'826': n04328186
'827': n04330267
'828': n04332243
'829': n04335435
'830': n04336792
'831': n04344873
'832': n04346328
'833': n04347754
'834': n04350905
'835': n04355338
'836': n04355933
'837': n04356056
'838': n04357314
'839': n04366367
'840': n04367480
'841': n04370456
'842': n04371430
'843': n04371774
'844': n04372370
'845': n04376876
'846': n04380533
'847': n04389033
'848': n04392985
'849': n04398044
'850': n04399382
'851': n04404412
'852': n04409515
'853': n04417672
'854': n04418357
'855': n04423845
'856': n04428191
'857': n04429376
'858': n04435653
'859': n04442312
'860': n04443257
'861': n04447861
'862': n04456115
'863': n04458633
'864': n04461696
'865': n04462240
'866': n04465501
'867': n04467665
'868': n04476259
'869': n04479046
'870': n04482393
'871': n04483307
'872': n04485082
'873': n04486054
'874': n04487081
'875': n04487394
'876': n04493381
'877': n04501370
'878': n04505470
'879': n04507155
'880': n04509417
'881': n04515003
'882': n04517823
'883': n04522168
'884': n04523525
'885': n04525038
'886': n04525305
'887': n04532106
'888': n04532670
'889': n04536866
'890': n04540053
'891': n04542943
'892': n04548280
'893': n04548362
'894': n04550184
'895': n04552348
'896': n04553703
'897': n04554684
'898': n04557648
'899': n04560804
'900': n04562935
'901': n04579145
'902': n04579432
'903': n04584207
'904': n04589890
'905': n04590129
'906': n04591157
'907': n04591713
'908': n04592741
'909': n04596742
'910': n04597913
'911': n04599235
'912': n04604644
'913': n04606251
'914': n04612504
'915': n04613696
'916': n06359193
'917': n06596364
'918': n06785654
'919': n06794110
'920': n06874185
'921': n07248320
'922': n07565083
'923': n07579787
'924': n07583066
'925': n07584110
'926': n07590611
'927': n07613480
'928': n07614500
'929': n07615774
'930': n07684084
'931': n07693725
'932': n07695742
'933': n07697313
'934': n07697537
'935': n07711569
'936': n07714571
'937': n07714990
'938': n07715103
'939': n07716358
'940': n07716906
'941': n07717410
'942': n07717556
'943': n07718472
'944': n07718747
'945': n07720875
'946': n07730033
'947': n07734744
'948': n07742313
'949': n07745940
'950': n07747607
'951': n07749582
'952': n07753113
'953': n07753275
'954': n07753592
'955': n07754684
'956': n07760859
'957': n07768694
'958': n07802026
'959': n07831146
'960': n07836838
'961': n07860988
'962': n07871810
'963': n07873807
'964': n07875152
'965': n07880968
'966': n07892512
'967': n07920052
'968': n07930864
'969': n07932039
'970': n09193705
'971': n09229709
'972': n09246464
'973': n09256479
'974': n09288635
'975': n09332890
'976': n09399592
'977': n09421951
'978': n09428293
'979': n09468604
'980': n09472597
'981': n09835506
'982': n10148035
'983': n10565667
'984': n11879895
'985': n11939491
'986': n12057211
'987': n12144580
'988': n12267677
'989': n12620546
'990': n12768682
'991': n12985857
'992': n12998815
'993': n13037406
'994': n13040303
'995': n13044778
'996': n13052670
'997': n13054560
'998': n13133613
'999': n15075141
- name: id
dtype: int64
splits:
- name: train
num_bytes: 6633504145.375
num_examples: 49101
download_size: 6622641479
dataset_size: 6633504145.375
---
# Dataset Card for "ImageNet1K-val-indexed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DynamicSuperb/NoiseDetection_LJSpeech_MUSAN-Speech | 2023-07-18T09:10:45.000Z | [
"region:us"
] | DynamicSuperb | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 3371932555.0
num_examples: 26200
download_size: 3362676277
dataset_size: 3371932555.0
---
# Dataset Card for "NoiseDetectionspeech_LJSpeechMusan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
davanstrien/blbooks-parquet-embedded | 2023-07-14T14:38:08.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:other",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"sou... | davanstrien | null | null | null | 0 | 4 | ---
annotations_creators:
- no-annotation
language_creators:
- machine-generated
language:
- de
- en
- es
- fr
- it
- nl
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets: davanstrien/blbooks-parquet
task_categories:
- text-generation
- fill-mask
- other
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: British Library Books
tags:
- embeddings
dataset_info:
- config_name: all
features:
- name: record_id
dtype: string
- name: date
dtype: int32
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30394267732
num_examples: 14011953
download_size: 10486035662
dataset_size: 30394267732
- config_name: 1800s
features:
- name: record_id
dtype: string
- name: date
dtype: int32
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30020434670
num_examples: 13781747
download_size: 10348577602
dataset_size: 30020434670
- config_name: 1700s
features:
- name: record_id
dtype: string
- name: date
dtype: int32
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 266382657
num_examples: 178224
download_size: 95137895
dataset_size: 266382657
- config_name: '1510_1699'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 107667469
num_examples: 51982
download_size: 42320165
dataset_size: 107667469
- config_name: '1500_1899'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30452067039
num_examples: 14011953
download_size: 10486035662
dataset_size: 30452067039
- config_name: '1800_1899'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30077284377
num_examples: 13781747
download_size: 10348577602
dataset_size: 30077284377
- config_name: '1700_1799'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 267117831
num_examples: 178224
download_size: 95137895
dataset_size: 267117831
---
# Dataset Card for "blbooks-parquet-embedded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jaimevera1107/similarity-sentences-spanish | 2023-07-24T14:11:43.000Z | [
"task_categories:sentence-similarity",
"size_categories:10K<n<100K",
"language:es",
"license:mit",
"region:us"
] | jaimevera1107 | null | null | null | 1 | 4 | ---
license: mit
task_categories:
- sentence-similarity
language:
- es
size_categories:
- 10K<n<100K
pretty_name: SimilaritySpanishDataset
---
# similarity-sentences-spanish (SSS)
### Dataset Summary
This dataset comprises a collection of sentences generated using Chat GPT-3, covering various general topics.
The dataset also includes sentences from two existing datasets, STS-ES and STSB-Multi-MT, as well as SICK, which were used as additional sources.
The sentences in this dataset were generated to exhibit varying levels of similarity, using prompts randomly assigned to target similarity scores.
| **Source** | **Share (rows)** | **Count (rows)** | **Score (avg)** |
|-----------|-----------------|------------------|----------------|
| GPT | 22.71 % | 3982 | 0.50 |
| STSB | 49.21 % | 8628 | 0.53 |
| STS | 17.69 % | 3102 | 0.42 |
| SICK | 10.38 % | 1820 | 0.51 |
| **Total** | 100% | 17532 | 0.49 |
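As a quick sanity check, the share and total columns above can be reproduced from the per-source row counts (a minimal sketch; since the per-source averages are rounded to two decimals, the recomputed weighted average lands near 0.50, consistent with the table's 0.49 up to rounding):

```python
# Cross-check the source statistics table. Counts and average scores
# are taken from the table above (averages rounded to 2 decimals).
sources = {
    "GPT":  (3982, 0.50),
    "STSB": (8628, 0.53),
    "STS":  (3102, 0.42),
    "SICK": (1820, 0.51),
}

total = sum(count for count, _ in sources.values())
shares = {name: round(100 * count / total, 2) for name, (count, _) in sources.items()}
weighted_avg = sum(count * score for count, score in sources.values()) / total

print(total)    # 17532, matching the table's total row
print(shares)   # per-source shares in percent
print(round(weighted_avg, 2))
```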
### Objective
The purpose of creating this dataset using Chat GPT-3 was to generate diverse text samples covering various topics and to ensure a balanced distribution of scores both overall and across different themes. By leveraging Chat GPT-3, the dataset aims to provide a wide range of sentence pairs with varying degrees of similarity for further analysis and research purposes.
### Languages
Spanish
## Dataset Structure
### Data Fields
- Sentence 1: The first sentence to be compared.
- Sentence 2: The second sentence to be compared.
- Score: A number between 0 and 1 indicating the similarity between Sentence 1 and Sentence 2, with 1 indicating high similarity.
- Source: The source of the information, represented by its abbreviation.
## Dataset Biases
This dataset inherits the biases present in its source datasets (STS-ES, STSB-Multi-MT, and SICK) as well as the biases inherent in a text generation model like Chat GPT-3.
### Source Data
The dataset was created using the following sources:
1. Already existing datasets:
    - STS-ES ([sts-es](https://huggingface.co/datasets/PlanTL-GOB-ES/sts-es))
    - STSB-Multi-MT ([stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt))
2. Newly generated data:
- Chat GPT-3: The sentences were generated using Chat GPT-3 for various general topics.
The dataset includes sentences from various themes, such as:
- Alimentación y Cocina (Food and Cooking)
- Arte y Cultura (Art and Culture)
- Ciencia y Tecnología (Science and Technology)
- Cine y Televisión (Film and Television)
- Deportes (Sports)
- Economía (Economy)
- Educación (Education)
- Estadística (Statistics)
- Filosofía (Philosophy)
- Finanzas (Finance)
- Historia (History)
- Literatura (Literature)
- Medicina (Medicine)
- Medio Ambiente y Sostenibilidad (Environment and Sustainability)
- Moda y Estilo (Fashion and Style)
- Música (Music)
- Organizacional (Organizational)
- Política y Gobierno (Politics and Government)
- Psicología (Psychology)
- Religión y Espiritualidad (Religion and Spirituality)
- Salud y Bienestar (Health and Wellness)
Please note that these themes are not exhaustive.
The prompts for each label (score) are as follows:
```python
descripciones_similaridad = {
"0.0": "Rewrite the following sentence in a new sentence about a completely different topic, without any apparent connection to the original sentence. The two sentences must be completely distinct and should not share any thematic similarity.",
"0.1": "Rewrite the following sentence in a new sentence about a topic completely different from the original sentence. Make sure the two sentences are entirely different and do not share any thematic similarity. At least 90% of the information level should change.",
"0.2": "Rewrite the following sentence in a new sentence about the same topic as the original sentence, but not an exact copy. You can express different ideas, but the general theme should be similar. Ensure at least 80% of the information level is different.",
"0.3": "Rewrite the following sentence in a new sentence about a topic related to the original sentence, though not equivalent. Both sentences must share a common theme or general idea, but they can express different viewpoints. At least 70% of the information level should change.",
"0.4": "Rewrite the following sentence in a new sentence that is not equivalent to the original, but has some similar details or elements. Ensure at least 60% of the information level is different.",
"0.5": "Rewrite the following sentence in a new sentence that is not equivalent to the original, but is related to some extent. Both sentences should have some details in common and be thematically related at least 50% of the information level.",
"0.6": "Rewrite the following sentence in a new sentence that is approximately equivalent to the original, but may differ in important information or have certain missing elements. The changes should slightly affect the meaning, and at least 60% of the information level should be preserved.",
"0.7": "Rewrite the following sentence in a new sentence that is approximately equivalent to the original, but may differ in important information or have some missing elements. Ensure at least 70% of the information level remains the same.",
"0.8": "Rewrite the following sentence in a new sentence that is mostly equivalent to the original, but may differ in some unimportant details. The changes should affect a maximum of 20% of the information level.",
"0.9": "Rewrite the following sentence in a new sentence that is nearly equivalent to the original, but may have some differences in minor details that do not significantly impact its meaning. The changes should affect a maximum of 10% of the information level.",
"1.0": "Rewrite the following sentence in a new sentence that is completely equivalent to the original, as they express exactly the same idea or meaning. The two sentences must share 100% of the information level.",
}
```
- SICK ([SICK Dataset](https://huggingface.co/datasets/sick))
The dataset also includes sentences sampled from the SICK dataset and translated into Spanish with a Helsinki-NLP OPUS-MT model ([opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es)); the sample was sized so that the overall average score stays close to 0.5.
To keep the dataset balanced and avoid over-representing translated text that was not originally written (or reviewed) in Spanish, scores are intentionally centered around 0.5.
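As an illustration, a hypothetical helper (not part of the original generation pipeline) that looks up the generation prompt for a target similarity score in the `descripciones_similaridad` mapping above might look like this:

```python
# Hypothetical lookup: map a target similarity score to the matching
# generation prompt. Keys in the mapping are one-decimal strings ("0.0".."1.0").
def prompt_for_score(score: float, prompts: dict) -> str:
    if not 0.0 <= score <= 1.0:
        raise ValueError("similarity score must lie in [0, 1]")
    key = f"{round(score, 1):.1f}"  # e.g. 0.52 -> "0.5"
    return prompts[key]

# Example with a trimmed-down mapping:
prompts = {
    "0.5": "Rewrite ... related at least 50% of the information level.",
    "1.0": "Rewrite ... share 100% of the information level.",
}
assert prompt_for_score(0.52, prompts) == prompts["0.5"]
```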
NeuroSenko/senko-voice | 2023-07-17T04:06:40.000Z | [
"region:us"
] | NeuroSenko | null | null | null | 0 | 4 | Entry not found |
jxu9001/custom_ontonotes5 | 2023-07-20T19:08:55.000Z | [
"region:us"
] | jxu9001 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: tags
sequence: int32
splits:
- name: train
num_bytes: 3773643
num_examples: 12195
- name: validation
num_bytes: 480047
num_examples: 1553
- name: test
num_bytes: 481250
num_examples: 1573
download_size: 0
dataset_size: 4734940
---
# Dataset Card for "custom_ontonotes5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pvrancx/legobricks | 2023-07-19T17:06:06.000Z | [
"task_categories:image-classification",
"size_categories:100K<n<1M",
"license:apache-2.0",
"region:us"
] | pvrancx | null | null | null | 2 | 4 | ---
license: apache-2.0
task_categories:
- image-classification
pretty_name: legobricks
size_categories:
- 100K<n<1M
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '10190'
'1': '10197'
'2': '10201'
'3': '10202'
'4': '10247'
'5': '10314'
'6': '10884'
'7': '10928'
'8': '11090'
'9': '11127'
'10': '11153'
'11': '11203'
'12': '11208'
'13': '11209'
'14': '11211'
'15': '11212'
'16': '11213'
'17': '11214'
'18': '11215'
'19': '11253'
'20': '11458'
'21': '11476'
'22': '11477'
'23': '11478'
'24': '11609'
'25': '11610'
'26': '11618'
'27': '11833'
'28': '11946'
'29': '11947'
'30': 122c01
'31': '12825'
'32': '13547'
'33': '13548'
'34': '13564'
'35': '13731'
'36': '13965'
'37': '13971'
'38': '14395'
'39': '14417'
'40': '14418'
'41': '14419'
'42': '14696'
'43': '14704'
'44': '14716'
'45': '14718'
'46': '14719'
'47': '14720'
'48': '14769'
'49': '15068'
'50': '15070'
'51': '15100'
'52': '15207'
'53': '15208'
'54': '15209'
'55': '15210'
'56': '15254'
'57': '15303'
'58': '15332'
'59': '15379'
'60': '15391'
'61': '15392'
'62': '15395'
'63': '15400'
'64': '15403'
'65': '15456'
'66': '15458'
'67': '15461'
'68': '15462'
'69': '15470'
'70': '15533'
'71': '15535'
'72': '15571'
'73': '15573'
'74': '15672'
'75': '15706'
'76': '15712'
'77': '16577'
'78': '16770'
'79': '17485'
'80': '18041'
'81': '18575'
'82': '18646'
'83': '18649'
'84': '18651'
'85': '18653'
'86': '18654'
'87': '18671'
'88': '18674'
'89': '18677'
'90': '18853'
'91': '18946'
'92': '18976'
'93': '18977'
'94': '18980'
'95': '19119'
'96': '19220'
'97': '20310'
'98': '20482'
'99': '21459'
'100': '2214'
'101': '22385'
'102': '22388'
'103': '22484'
'104': '22667'
'105': '22885'
'106': '22886'
'107': '22888'
'108': '22889'
'109': '22890'
'110': '22961'
'111': '2300'
'112': '2301'
'113': '2302'
'114': '2335'
'115': '2339'
'116': '2340'
'117': '2343'
'118': '23443'
'119': '2346'
'120': '2357'
'121': 2362a
'122': '2377'
'123': '23950'
'124': '23969'
'125': '24122'
'126': 2412a
'127': 2412b
'128': '2413'
'129': '2417'
'130': '2419'
'131': '2420'
'132': '24201'
'133': '2423'
'134': '24246'
'135': '24299'
'136': '24307'
'137': '24309'
'138': '2431'
'139': '24316'
'140': '2432'
'141': '2436'
'142': '2437'
'143': '24375'
'144': '2444'
'145': '2445'
'146': '2446'
'147': '2447'
'148': '2449'
'149': '2450'
'150': '24505'
'151': '2452'
'152': 2453a
'153': 2453b
'154': 2454a
'155': 2454b
'156': '2456'
'157': '2458'
'158': '2460'
'159': '2462'
'160': '2465'
'161': 2476a
'162': '2479'
'163': '24855'
'164': '2486'
'165': '24866'
'166': '2489'
'167': '2496'
'168': '25214'
'169': '25269'
'170': '2530'
'171': '2540'
'172': '2555'
'173': '2566'
'174': '2569'
'175': '2577'
'176': '25893'
'177': '26047'
'178': '2639'
'179': '2653'
'180': '2654'
'181': '2655'
'182': '26601'
'183': '26603'
'184': '26604'
'185': '2723'
'186': '27261'
'187': '27263'
'188': '273'
'189': '2730'
'190': '2736'
'191': '2744'
'192': '27507'
'193': '2780'
'194': '27925'
'195': '27940'
'196': '2815'
'197': '2817'
'198': '28192'
'199': '2825'
'200': 2850a
'201': 2850b
'202': '2851'
'203': '2852'
'204': '2853'
'205': '2854'
'206': '2877'
'207': 2878c01
'208': '28802'
'209': '28974'
'210': '2905'
'211': '29119'
'212': '29120'
'213': '2921'
'214': '2926'
'215': '30000'
'216': '3001'
'217': '3002'
'218': 30027b
'219': '30028'
'220': '3003'
'221': '30031'
'222': '3004'
'223': '30043'
'224': '30044'
'225': '30046'
'226': '3005'
'227': '30055'
'228': '3006'
'229': '3007'
'230': '3008'
'231': 30089b
'232': '3009'
'233': '30093'
'234': '30099'
'235': '3010'
'236': '3011'
'237': '30132'
'238': '30136'
'239': '30137'
'240': '30145'
'241': '30150'
'242': '30153'
'243': '30157'
'244': '30162'
'245': '30165'
'246': 30173b
'247': '30176'
'248': '3020'
'249': '3021'
'250': '3022'
'251': '3023'
'252': '30236'
'253': '3024'
'254': '3027'
'255': '3028'
'256': '30285'
'257': '3029'
'258': '3030'
'259': '3031'
'260': '3032'
'261': '3033'
'262': '3034'
'263': '30340'
'264': '3035'
'265': 30350b
'266': '30355'
'267': '30356'
'268': '30357'
'269': 30359b
'270': '3036'
'271': '30363'
'272': '30364'
'273': '30365'
'274': 30367b
'275': 30367c
'276': '3037'
'277': '30374'
'278': '30377'
'279': '3038'
'280': '30383'
'281': '30385'
'282': '30386'
'283': '3039'
'284': '30391'
'285': '30395'
'286': 3040a
'287': 3040b
'288': '3041'
'289': '30414'
'290': '3043'
'291': 3044c
'292': '3045'
'293': 3049d
'294': '30503'
'295': '30504'
'296': '30526'
'297': '30552'
'298': '30553'
'299': 30554b
'300': '30562'
'301': '30565'
'302': '30586'
'303': '30592'
'304': '30602'
'305': 3062a
'306': 3062b
'307': 3063b
'308': '30648'
'309': '3065'
'310': '30663'
'311': 3068a
'312': 3068b
'313': 3069a
'314': 3069b
'315': 3070b
'316': 3081bc01
'317': 3081cc01
'318': '31000'
'319': '31110'
'320': 3137c01
'321': '3139'
'322': '3176'
'323': '3184'
'324': '3185'
'325': '32000'
'326': '32001'
'327': '32002'
'328': '32009'
'329': '32013'
'330': '32014'
'331': '32015'
'332': '32016'
'333': '32017'
'334': '32018'
'335': '32028'
'336': '32034'
'337': '32039'
'338': '32054'
'339': '32056'
'340': '32059'
'341': '32062'
'342': '32063'
'343': 32064a
'344': 32064b
'345': '32065'
'346': '32072'
'347': '32073'
'348': 32123a
'349': 32123b
'350': '32124'
'351': '32126'
'352': '32138'
'353': '32140'
'354': '32174'
'355': '32184'
'356': '32187'
'357': '32192'
'358': '32198'
'359': '32200'
'360': '32209'
'361': '32211'
'362': '32249'
'363': '32250'
'364': '32269'
'365': '32270'
'366': '32271'
'367': '32278'
'368': 3228a
'369': '32291'
'370': 3229a
'371': 3230a
'372': '32316'
'373': '32324'
'374': '32348'
'375': '32449'
'376': 3245b
'377': 3245c
'378': '32474'
'379': '32523'
'380': '32524'
'381': '32525'
'382': '32526'
'383': '32529'
'384': '32530'
'385': '32531'
'386': '32532'
'387': '32555'
'388': '32556'
'389': '32557'
'390': '32606'
'391': '32607'
'392': '32803'
'393': '32828'
'394': '32952'
'395': '3297'
'396': '3298'
'397': '3299'
'398': '3300'
'399': '33051'
'400': '3307'
'401': '33078'
'402': '3308'
'403': '33085'
'404': '33172'
'405': '33183'
'406': '33243'
'407': '33286'
'408': '33291'
'409': 33299a
'410': 33299b
'411': '33303'
'412': '33320'
'413': '33909'
'414': '34103'
'415': '34337'
'416': '3437'
'417': '3455'
'418': '3456'
'419': '3460'
'420': '3464'
'421': 3475b
'422': '34816'
'423': '3482'
'424': '3483'
'425': '35044'
'426': '35459'
'427': '35464'
'428': '35480'
'429': '35787'
'430': '3581'
'431': '3582'
'432': '3612'
'433': '3613'
'434': '3622'
'435': '3623'
'436': '3624'
'437': 3626b
'438': 3626c
'439': '3633'
'440': '3634'
'441': '3641'
'442': '3647'
'443': 3648a
'444': 3648b
'445': '3649'
'446': 3650c
'447': '3651'
'448': '3659'
'449': '3660'
'450': '3665'
'451': '3666'
'452': '3673'
'453': '3675'
'454': 36752a
'455': '3676'
'456': 3678b
'457': '3679'
'458': '3680'
'459': '3684'
'460': '36840'
'461': '36841'
'462': '3685'
'463': '3700'
'464': '3701'
'465': '3702'
'466': '3703'
'467': '3704'
'468': '3705'
'469': '3706'
'470': '3707'
'471': '3708'
'472': '3709'
'473': '3710'
'474': '3713'
'475': '37352'
'476': '3737'
'477': '3738'
'478': '3741'
'479': '3742'
'480': '3743'
'481': 3747a
'482': 3747b
'483': '3749'
'484': '37695'
'485': '37762'
'486': '37775'
'487': '3788'
'488': 3794a
'489': 3794b
'490': '3795'
'491': '3821'
'492': '3822'
'493': '3823'
'494': 3829c01
'495': '3830'
'496': '3831'
'497': '3832'
'498': '38320'
'499': '3833'
'500': '3835'
'501': '3836'
'502': '3837'
'503': 3839b
'504': '3849'
'505': '3853'
'506': '3854'
'507': '3856'
'508': '3857'
'509': 3861b
'510': '3873'
'511': '3894'
'512': '3895'
'513': '3899'
'514': '3900'
'515': '3901'
'516': '3937'
'517': '3938'
'518': '3941'
'519': 3942c
'520': 3943b
'521': '3956'
'522': 3957a
'523': 3957b
'524': '3958'
'525': '3959'
'526': '3960'
'527': 3962b
'528': '3963'
'529': '39739'
'530': '39789'
'531': '39793'
'532': '4006'
'533': '4019'
'534': '4022'
'535': 4032a
'536': '4033'
'537': '4034'
'538': '40378'
'539': '40379'
'540': '40490'
'541': '40666'
'542': '4070'
'543': '4079'
'544': 4081b
'545': '4083'
'546': '4084'
'547': 4085b
'548': 4085c
'549': '4095'
'550': '41239'
'551': '4132'
'552': '4133'
'553': '4143'
'554': '4150'
'555': '41531'
'556': '41532'
'557': '41539'
'558': '4161'
'559': '4162'
'560': '4166'
'561': '41669'
'562': '41677'
'563': '41678'
'564': '41682'
'565': '41740'
'566': '41747'
'567': '41748'
'568': '4175'
'569': '4176'
'570': '41767'
'571': '41768'
'572': '41769'
'573': '41770'
'574': '4185'
'575': '41854'
'576': '41862'
'577': 41879a
'578': '4199'
'579': '42003'
'580': '42022'
'581': '42023'
'582': '4213'
'583': 4215b
'584': '4216'
'585': '4218'
'586': '42446'
'587': '42610'
'588': 4265a
'589': 4265b
'590': 4273b
'591': '4274'
'592': 4275b
'593': 4276b
'594': '4282'
'595': 4285b
'596': '4286'
'597': 4287a
'598': 4287b
'599': 4287c
'600': '42924'
'601': '43093'
'602': '4315'
'603': '43337'
'604': 4345b
'605': '4346'
'606': '4349'
'607': '43710'
'608': '43711'
'609': '43712'
'610': '43713'
'611': '43719'
'612': '43722'
'613': '43723'
'614': '43857'
'615': '43888'
'616': '43898'
'617': '44126'
'618': '44294'
'619': '44300'
'620': 44301a
'621': 44301b
'622': 44302a
'623': '44309'
'624': 44375b
'625': '4445'
'626': '4449'
'627': '44524'
'628': 44567a
'629': 44567b
'630': '44568'
'631': '44570'
'632': '4459'
'633': 4460a
'634': 4460b
'635': '44674'
'636': '44676'
'637': '44728'
'638': '4477'
'639': '44809'
'640': '4485'
'641': '44861'
'642': '44874'
'643': '4488'
'644': '4490'
'645': 4495a
'646': 4495b
'647': '4497'
'648': '4510'
'649': '4515'
'650': '4519'
'651': '4522'
'652': '4528'
'653': '4531'
'654': '4532'
'655': '4533'
'656': '4536'
'657': '45590'
'658': '45677'
'659': '458'
'660': '4588'
'661': '4589'
'662': '4590'
'663': '4595'
'664': 4599a
'665': 4599b
'666': '4600'
'667': '46212'
'668': '4623'
'669': '4624'
'670': '4625'
'671': '4672'
'672': 4697b
'673': '4716'
'674': '4727'
'675': '4728'
'676': '4733'
'677': '4735'
'678': 4738a
'679': '47397'
'680': '47398'
'681': 4739a
'682': '4740'
'683': '47455'
'684': '47456'
'685': '47457'
'686': '47458'
'687': '47753'
'688': '47755'
'689': '47847'
'690': '47905'
'691': '48092'
'692': '48169'
'693': '48170'
'694': '48171'
'695': '48336'
'696': '4854'
'697': '4855'
'698': '4859'
'699': '4862'
'700': 4864a
'701': 4864b
'702': 4865a
'703': 4865b
'704': '4870'
'705': '4871'
'706': 48729a
'707': 48729b
'708': '48989'
'709': '49307'
'710': '49668'
'711': '50254'
'712': '50304'
'713': '50305'
'714': '50745'
'715': '50861'
'716': '50862'
'717': '50923'
'718': '50943'
'719': '50950'
'720': '50951'
'721': '51739'
'722': '52031'
'723': '52107'
'724': '52501'
'725': '53400'
'726': '53451'
'727': '53585'
'728': '53989'
'729': '54200'
'730': '54383'
'731': '54384'
'732': '54657'
'733': '54821'
'734': '55013'
'735': '55236'
'736': '55615'
'737': '55981'
'738': '55982'
'739': '56145'
'740': '56902'
'741': '57518'
'742': '57585'
'743': '57878'
'744': '57895'
'745': '58090'
'746': '58176'
'747': '58247'
'748': '59230'
'749': '59275'
'750': '59349'
'751': '59426'
'752': '59443'
'753': '59895'
'754': '59900'
'755': '6003'
'756': '60032'
'757': '6005'
'758': '6015'
'759': '60169'
'760': '60176'
'761': '6019'
'762': '6020'
'763': '60208'
'764': '60212'
'765': '60219'
'766': '6041'
'767': 60470a
'768': 60470b
'769': '60471'
'770': '60474'
'771': 60475a
'772': 60475b
'773': '60476'
'774': '60477'
'775': '60478'
'776': '60479'
'777': '60481'
'778': '60483'
'779': '60484'
'780': '60485'
'781': '60581'
'782': 60583b
'783': '60592'
'784': '60593'
'785': '60594'
'786': '60596'
'787': '6060'
'788': '60601'
'789': '60602'
'790': '60603'
'791': '60607'
'792': '60608'
'793': 60616b
'794': '60623'
'795': '6064'
'796': '60700'
'797': '6081'
'798': '60849'
'799': '60897'
'800': '6091'
'801': '6106'
'802': '61072'
'803': '6111'
'804': '6112'
'805': '61184'
'806': '61252'
'807': '61254'
'808': 6126a
'809': 6126b
'810': '61332'
'811': '6134'
'812': '61345'
'813': '6140'
'814': '61409'
'815': '6141'
'816': '6148'
'817': '61482'
'818': '61485'
'819': '6157'
'820': '61678'
'821': '61780'
'822': '6179'
'823': '6180'
'824': '6182'
'825': '6183'
'826': '6187'
'827': '6190'
'828': '61903'
'829': '6191'
'830': '6192'
'831': '62113'
'832': '6215'
'833': '6222'
'834': '6223'
'835': '6231'
'836': '6232'
'837': '6233'
'838': '62361'
'839': '6239'
'840': '62462'
'841': '6248'
'842': '6249'
'843': '62531'
'844': '6254'
'845': '6256'
'846': '6259'
'847': '6266'
'848': '62810'
'849': '63082'
'850': '6378'
'851': '63864'
'852': '63868'
'853': '63869'
'854': '63965'
'855': '64179'
'856': '64225'
'857': '64448'
'858': '64570'
'859': '64644'
'860': '64647'
'861': '64648'
'862': '64727'
'863': '6474'
'864': '64782'
'865': '64799'
'866': '6510'
'867': '6536'
'868': 6538b
'869': '6541'
'870': '65487'
'871': '65509'
'872': '6553'
'873': '65578'
'874': '6558'
'875': '6564'
'876': '6565'
'877': '6575'
'878': '6583'
'879': '6587'
'880': '6589'
'881': '6628'
'882': '6629'
'883': '6632'
'884': '6636'
'885': '66792'
'886': '66906'
'887': '67329'
'888': '69729'
'889': '7039'
'890': '72454'
'891': '73092'
'892': '73230'
'893': '73825'
'894': '74261'
'895': '74967'
'896': '75535'
'897': '75937'
'898': '76371'
'899': '76766'
'900': '78258'
'901': '78329'
'902': '79389'
'903': '85080'
'904': '85543'
'905': '85544'
'906': '85861'
'907': '85941'
'908': '85943'
'909': '85975'
'910': '85984'
'911': '86035'
'912': '86996'
'913': '87079'
'914': '87081'
'915': '87082'
'916': '87083'
'917': '87087'
'918': '87414'
'919': '87544'
'920': '87552'
'921': '87580'
'922': '87609'
'923': '87617'
'924': '87618'
'925': '87620'
'926': '87697'
'927': '87747'
'928': '87994'
'929': '88072'
'930': '88292'
'931': '88293'
'932': '88323'
'933': '88393'
'934': '88646'
'935': '88930'
'936': '89201'
'937': '89522'
'938': '89678'
'939': '90194'
'940': '90195'
'941': '90258'
'942': '90398'
'943': '90609'
'944': '90617'
'945': '90640'
'946': '90641'
'947': '91405'
'948': '91501'
'949': '91988'
'950': '92013'
'951': '92099'
'952': '92220'
'953': '92280'
'954': '92402'
'955': '92409'
'956': '92410'
'957': '92438'
'958': '9244'
'959': '92582'
'960': '92593'
'961': '92690'
'962': '92692'
'963': '92738'
'964': '92851'
'965': '92907'
'966': '92946'
'967': '92947'
'968': '92950'
'969': '93061'
'970': '93095'
'971': '93160'
'972': '93273'
'973': '93274'
'974': '93555'
'975': '93594'
'976': '93606'
'977': '93609'
'978': '94925'
'979': '95344'
'980': '96874'
'981': '98100'
'982': '98138'
'983': '98139'
'984': '98223'
'985': '98233'
'986': '98282'
'987': '98283'
'988': '98313'
'989': '98585'
'990': '98721'
'991': '98834'
'992': '99008'
'993': '99021'
'994': '99206'
'995': '99207'
'996': '99563'
'997': '99773'
'998': '99780'
'999': '99781'
splits:
- name: train
num_bytes: 25066440000.0
num_examples: 400000
download_size: 13152000872
dataset_size: 25066440000.0
---
# Dataset Card for LegoBricks
### Dataset Summary
3D images of LEGO Parts. Dataset contains the 1000 most common LEGO parts (according to the [rebrickable database](https://rebrickable.com/help/lego-database/)).
Each part has 400 images of different rotation angles and colors. Colors are sampled randomly, weighted by the number of occurrences of that part and color in the database.
The dataset contains a train split with 1000 classes, each represented by 400 images.
Class names are the LEGO part IDs. These IDs can be used to reference the part on [BrickLink](https://www.bricklink.com/) or [Rebrickable](https://rebrickable.com).
Note that identical parts can be present under multiple IDs, due to mold updates by LEGO.
Alternative IDs can be found on Bricklink.
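As a minimal sketch of how the class labels above can be used: a predicted class index resolves back to a part ID and a catalog link. The `names` dict is a small excerpt of the `class_label` names in `dataset_info`, and the URL scheme is an assumption about BrickLink's current catalog layout.

```python
# Minimal sketch: resolve a predicted class index back to a LEGO part ID and
# a BrickLink catalog URL. `names` is a small excerpt of the class_label
# mapping in dataset_info above; the URL format is an assumption.
names = {802: "61072", 803: "6111", 808: "6126a", 999: "99781"}

def part_url(class_index: int) -> str:
    """Map a ClassLabel index to a BrickLink catalog page for that part."""
    part_id = names[class_index]
    return f"https://www.bricklink.com/v2/catalog/catalogitem.page?P={part_id}"

print(part_url(808))  # ...catalogitem.page?P=6126a
```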
## Dataset Creation
Parts IDs and statistics were extracted from [rebrickable](https://rebrickable.com/) database. Images generated using [ldraw](https://www.ldraw.org/).
This dataset is not created or endorsed by LEGO. LEGO® is a trademark of the LEGO Group of companies.
|
mriosqu/landing_pages_dataset | 2023-08-09T19:57:32.000Z | [
"region:us"
] | mriosqu | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 66571452.0
num_examples: 67
download_size: 64024938
dataset_size: 66571452.0
---
# Dataset Card for "landing_pages_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nos1de/vulnerable-functions | 2023-07-20T11:56:35.000Z | [
"region:us"
] | nos1de | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: sha
dtype: string
- name: remote_url
dtype: string
- name: labels
dtype:
class_label:
names:
'0': vulnerable
'1': not_vulnerable
- name: commit_msg
dtype: string
- name: function
dtype: string
splits:
- name: train
num_bytes: 21681861
num_examples: 7240
download_size: 8393520
dataset_size: 21681861
---
# Dataset Card for "vulnerable-functions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceM4/MMBench_dev | 2023-08-23T13:39:36.000Z | [
"arxiv:2307.06281",
"region:us"
] | HuggingFaceM4 | null | null | null | 3 | 4 | ---
dataset_info:
features:
- name: question
dtype: string
- name: hint
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: label
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: image
dtype: image
splits:
- name: train
num_bytes: 102942038.498
num_examples: 4377
download_size: 99866501
dataset_size: 102942038.498
---
# Dataset Card for "MMBench_dev"
## Dataset Description
* **Homepage**: https://opencompass.org.cn/mmbench
* **Repository**: https://github.com/internLM/OpenCompass/
* **Paper**: https://arxiv.org/abs/2307.06281
* **Leaderboard**: https://opencompass.org.cn/leaderboard-multimodal
* **Point of Contact**: opencompass@pjlab.org.cn
### Dataset Summary
In recent years, the field has seen a surge in the development of numerous vision-language (VL) models, such as MiniGPT-4 and LLaVA. These models showcase promising performance in tackling previously challenging tasks. However, effectively evaluating these models' performance has become a primary challenge hindering further advancement in large VL models. Traditional benchmarks like VQAv2 and COCO Caption are widely used to provide quantitative evaluations for VL models but suffer from several shortcomings:
Dataset Construction: Traditional benchmarks tend to evaluate models based on their performance in various tasks, such as image captioning and visual question answering. Unfortunately, these tasks do not fully capture the fine-grained abilities that a model possesses, potentially impeding future optimization efforts.
Evaluation Metrics: Existing evaluation metrics lack robustness. For example, VQAv2 targets a single word or phrase, while many current VL models generate sentences as outputs. Although these sentences may correctly answer the corresponding questions, the existing evaluation metric would assign a Fail score due to an inability to exactly match the given answer. Moreover, recently proposed subjective evaluation metrics, such as that used in mPLUG-Owl, offer comprehensive evaluation of VL models. However, these metrics struggle to scale smoothly due to the significant amount of human labor required for evaluation. Additionally, these evaluations are highly biased and difficult to reproduce.
To address these limitations, we propose a novel approach by defining a set of fine-grained abilities and collecting relevant questions for each ability. We also introduce innovative evaluation strategies to ensure more robust assessment of model predictions. This new benchmark, called MMBench, boasts the following features:
Data Collection: To date, we have gathered approximately 3000 questions spanning 20 ability dimensions. Each question is in multiple-choice format with a single correct answer.
Evaluation: For a more reliable evaluation, we employ ChatGPT to match a model's prediction with the choices of a question, and then output the corresponding label (A, B, C, D) as the final prediction.
### Languages
All of our questions are presented in single-choice question format, with the number of options ranging from 2 to 4. In addition, all these questions, options, and answers are in English.
## Dataset Structure
### Data Instances
We provide an overview of an instance in MMBench as follows:
```text
{
'index': 241,
'question': "Identify the question that Madelyn and Tucker's experiment can best answer.",
'hint': 'The passage below describes an experiment. Read the passage and then follow the
instructions below.\n\nMadelyn applied a thin layer of wax to the underside of her
snowboard and rode the board straight down a hill. Then, she removed the wax and rode
the snowboard straight down the hill again. She repeated the rides four more times,
alternating whether she rode with a thin layer of wax on the board or not. Her friend
Tucker timed each ride. Madelyn and Tucker calculated the average time it took to slide
straight down the hill on the snowboard with wax compared to the average time on the
snowboard without wax.\nFigure: snowboarding down a hill.',
'A': "Does Madelyn's snowboard slide down a hill in less time when it has a thin layer of wax or
a thick layer of wax?",
'B': "Does Madelyn's snowboard slide down a hill in less time when it has a layer of wax or
when it does not have a layer of wax?",
'image': xxxxxx,
'category': 'identity_reasoning',
'l2-category': 'attribute_reasoning',
'split': 'dev',
'source': 'scienceqa',
}
```
### Data Fields
* `index`: the index of the instance in the dataset.
* `question`: the question of the instance.
* `hint (optional)`: the hint of the instance.
* `A`: the first option of the instance.
* `B`: the second option of the instance.
* `C (optional)`: the third option of the instance.
* `D (optional)`: the fourth option of the instance.
* `image`: the raw image of the instance.
* `category`: the leaf category of the instance.
* `l2-category`: the L-2 category of the instance.
* `split`: the split of the instance.
* `source`: the source that the instance comes from.
### Data Splits
Currently, MMBench contains 2974 instances in total, split into **dev** and **test** sets at a 4:6 ratio.
## Additional Information
### Citation Information
```
@article{MMBench,
    author  = {Yuan Liu and Haodong Duan and Yuanhan Zhang and Bo Li and Songyang Zhang and Wangbo Zhao and Yike Yuan and Jiaqi Wang and Conghui He and Ziwei Liu and Kai Chen and Dahua Lin},
journal = {arXiv:2307.06281},
title = {MMBench: Is Your Multi-modal Model an All-around Player?},
year = {2023},
}
``` |
crumb/Open-Orca-k8 | 2023-07-21T07:29:06.000Z | [
"region:us"
] | crumb | null | null | null | 1 | 4 | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 1796489136
num_examples: 994896
download_size: 1022896633
dataset_size: 1796489136
---
# Dataset Card for "Open-Orca-k8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
andersonbcdefg/chemistry | 2023-07-21T01:24:18.000Z | [
"region:us"
] | andersonbcdefg | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: role_1
dtype: string
- name: topic;
dtype: string
- name: sub_topic
dtype: string
- name: message_1
dtype: string
- name: message_2
dtype: string
splits:
- name: train
num_bytes: 47000178
num_examples: 20000
download_size: 21669458
dataset_size: 47000178
---
# Dataset Card for "chemistry"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mpiquero/prompts | 2023-07-21T12:37:48.000Z | [
"region:us"
] | mpiquero | null | null | null | 0 | 4 | Entry not found |
projecte-aina/ceil | 2023-09-13T12:29:55.000Z | [
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"language:ca",
"license:mit",
"region:us"
] | projecte-aina | CEIL (Catalan Entity Identification and Linking).
This is a dataset for complex Named Entity Recognition (NER) created by the AINA project at the BSC for
Machine Learning and Language Model evaluation purposes.
The CEIL corpus is used under a [CC-BY](https://creativecommons.org/licenses/by/4.0/) licence.
This dataset was developed by BSC as part of the AINA project, and to enrich the Catalan Language Understanding Benchmark (CLUB). | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- mit
multilinguality:
- monolingual
pretty_name: ceil
size_categories:
- unknown
source_datasets: []
task_categories: []
task_ids: []
---
# Dataset Card for CEIL
## Dataset Description
- **Website:** https://aina.bsc.es
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es)
### Dataset Summary
CEIL (Catalan Entity Identification and Linking) is a dataset for complex Named Entity Recognition (NER) created by the AINA project at the BSC for Machine Learning and Language Model evaluation purposes in Catalan. It contains 9 main entity types and 52 subtypes across all kinds of short texts, with almost 59K documents.
The CEIL corpus is used under a [CC-BY](https://creativecommons.org/licenses/by/4.0/) licence.
This dataset was developed by [BSC LangTech Unit](https://langtech.bsc.es/) as part of the [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).

### Supported Tasks and Leaderboards
Named Entities Recognition, Language Model
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
Three two-column files, one for each split.
<pre>
l' O
obra O
de O
Galileu B-person-scholar/scientist
, O
i O
de O
la O
multiplicació O
de O
les O
acadèmies O
científiques O
, O
com O
l' O
Accademia B-organization-education
dei I-organization-education
Lincei I-organization-education
</pre>
### Data Fields
Every file has two columns, with the word form or punctuation symbol in the first one and the corresponding IOB tag in the second one.
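A minimal reader for this layout (a sketch, assuming sentences are separated by blank lines, as is conventional for CoNLL-style files like the excerpt above):

```python
# Minimal sketch: read a two-column IOB file like the excerpt above into
# (tokens, tags) pairs, one pair per sentence. Assumes blank lines separate
# sentences, as is conventional for CoNLL-style files.
def read_conll(text: str):
    sentences, tokens, tags = [], [], []
    for line in text.splitlines():
        line = line.strip()
        if not line:  # a blank line ends the current sentence
            if tokens:
                sentences.append((tokens, tags))
                tokens, tags = [], []
            continue
        token, tag = line.split()
        tokens.append(token)
        tags.append(tag)
    if tokens:  # flush the last sentence
        sentences.append((tokens, tags))
    return sentences

sample = "l' O\nobra O\nde O\nGalileu B-person-scholar/scientist\n"
toks, labels = read_conll(sample)[0]
print(toks[3], labels[3])  # Galileu B-person-scholar/scientist
```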
### Data Splits
80/20 train and development sets, balanced for all NERC tags. The test set includes documents that together cover all the possible types in the corpus.
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan.
### Source Data
Documents were gathered from various online sources:
- Tweets about different topics, such as Catalan independence, coronavirus, Benidorm Fest, vaccines, etc.
- Newswire from Nació Digital (Motor), Vilaweb (opinion pieces), and Agència Catalana de Notícies (Economy and Memòria Històrica)
- Various threads from the Racó Català forum
- Viquipèdia articles (women's bios, film synopses, etc.)
- Other: parliament proceedings, restaurant online reviews, etc.
#### Initial Data Collection and Normalization
The word tokenization used to convert offset annotations into CoNLL files was done using spaCy.
#### Who are the source language producers?
Annotation was subcontracted to M47Labs.
Guidelines are available on [Zenodo](https://doi.org/10.5281/zenodo.8318188).
### Annotations
#### Annotation process
We adapted the NER labels to a token-per-line, multi-column format.
#### Who are the annotators?
Original annotators from
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/en/inici/index.html) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
### Licensing information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">Attribution 4.0 International License</a>.
### Citation Information
```
```
### Contributions
[N/A]
| |
seaurkin/facial_exrpressions | 2023-07-22T15:25:53.000Z | [
"license:mit",
"region:us"
] | seaurkin | null | null | null | 1 | 4 | ---
license: mit
---
# Dataset Card for Facial Expression
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset was created manually to train models for expression detection. The available Action Units are: smile, kiss, frowning brows, raised brows, open mouth and neutral.
|
rdpahalavan/packet-tag-explanation | 2023-07-22T22:14:56.000Z | [
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"Network Intrusion Detection",
"Cybersecurity",
"Network Packets",
"region:us"
] | rdpahalavan | null | null | null | 0 | 4 | ---
license: apache-2.0
tags:
- Network Intrusion Detection
- Cybersecurity
- Network Packets
size_categories:
- 100K<n<1M
language:
- en
---
This dataset contains the packet information and the tags and their corresponding explanation. For more information, [visit here](https://github.com/rdpahalavan/nids-transformers). |
dim/mt_bench_ru | 2023-07-25T13:19:39.000Z | [
"region:us"
] | dim | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: question_id
dtype: int64
- name: category
dtype: string
- name: turns
sequence: string
- name: turns_ru
sequence: string
splits:
- name: train
num_bytes: 95817
num_examples: 80
download_size: 55916
dataset_size: 95817
---
# Dataset Card for "mt_bench_ru"
A dataset translated automatically with facebook/wmt21-dense-24-wide-en-x and then corrected by me personally in some places.
If you would like to correct this dataset, you can use this Google Sheet: https://docs.google.com/spreadsheets/d/1C2znaufnvMU2PyqaDKMTrRKPvS60xtisdcRSlqQGUUs/edit?usp=sharing |
FreedomIntelligence/MMLU_Arabic | 2023-08-06T08:03:32.000Z | [
"language:ar",
"license:mit",
"region:us"
] | FreedomIntelligence | null | null | null | 0 | 4 | ---
license: mit
language:
- ar
---
Arabic version of MMLU dataset translated by gpt-3.5-turbo.
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). |
RogerB/kin_en_DigitalUmuganda | 2023-07-24T16:23:50.000Z | [
"region:us"
] | RogerB | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: rw
dtype: string
- name: en
dtype: string
splits:
- name: train
num_bytes: 4550456
num_examples: 47824
download_size: 2836819
dataset_size: 4550456
---
# Dataset Card for "kin_en_DigitalUmuganda"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Dataset Information
The dataset was created by [DigitalUmuganda](https://huggingface.co/datasets/DigitalUmuganda/kinyarwanda-english-machine-translation-dataset/tree/main) for machine translation from Kinyarwanda to English |
fedryanto/UnibQuADV2 | 2023-08-18T14:20:43.000Z | [
"region:us"
] | fedryanto | null | 0 | 4 | Entry not found | ||
leonardPKU/orca_flan_split_task | 2023-07-25T13:53:46.000Z | [
"region:us"
] | leonardPKU | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: task_name
dtype: string
splits:
- name: train
num_bytes: 2438766275
num_examples: 1649259
download_size: 1351527573
dataset_size: 2438766275
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "orca_flan_split_task"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jeffnyman/rotten_tomatoes_reviews | 2023-07-25T16:16:20.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"license:cc-by-sa-4.0",
"region:us"
] | jeffnyman | Movie Review Dataset.
This is a dataset containing 4,265 positive and 4,265 negative processed
sentences from Rotten Tomatoes movie reviews. | @InProceedings{Pang+Lee:05a,
author = {Bo Pang and Lillian Lee},
title = {Seeing stars: Exploiting class relationships for sentiment
categorization with respect to rating scales},
booktitle = {Proceedings of the ACL},
year = 2005
} | null | 0 | 4 | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for "rotten_tomatoes_reviews"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [http://www.cs.cornell.edu/people/pabo/movie-review-data/](http://www.cs.cornell.edu/people/pabo/movie-review-data/)
- **Paper:** [https://arxiv.org/abs/cs/0506075](https://arxiv.org/abs/cs/0506075)
### Dataset Summary
Movie Review Dataset.
This is a dataset containing 4,265 positive and 4,265 negative processed
sentences from Rotten Tomatoes movie reviews.
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### default
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `neg` (0), `pos` (1).
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 8530| 1066|1066|
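As a quick arithmetic check on the counts above (the reading that the balanced 4,265 + 4,265 sentences correspond to the train split is an inference from these numbers, not stated explicitly in the card):

```python
# Sketch: the summary reports 4,265 positive and 4,265 negative sentences;
# that total (8,530) equals the train split in the table above, while
# validation and test hold 1,066 examples each.
pos, neg = 4265, 4265
splits = {"train": 8530, "validation": 1066, "test": 1066}
assert pos + neg == splits["train"]
print(sum(splits.values()))  # examples across all splits combined
```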
## Additional Information
### Citation Information
```
@InProceedings{Pang+Lee:05a,
author = {Bo Pang and Lillian Lee},
title = {Seeing stars: Exploiting class relationships for sentiment
categorization with respect to rating scales},
booktitle = {Proceedings of the ACL},
year = 2005
}
```
|
baebee/merged-pf | 2023-07-25T17:10:28.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | baebee | null | null | null | 0 | 4 | ---
task_categories:
- question-answering
- text-generation
language:
- en
pretty_name: merged-pf
size_categories:
- 10K<n<100K
--- |
DynamicSuperb/AccentClassification_AccentdbExtended | 2023-07-26T05:18:30.000Z | [
"region:us"
] | DynamicSuperb | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: label
dtype: string
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 17187452734.084
num_examples: 17313
download_size: 5693971728
dataset_size: 17187452734.084
---
# Dataset Card for "accent_classification_accentdb_extended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hac541309/basic_korean_dict | 2023-07-26T12:28:43.000Z | [
"task_categories:table-question-answering",
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:ko",
"language:mn",
"language:vi",
"language:th",
"language:id",
"language:ru",
"language:ja",
"la... | hac541309 | null | null | null | 1 | 4 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 198591964
num_examples: 74936
download_size: 88466367
dataset_size: 198591964
license: cc-by-sa-3.0
task_categories:
- table-question-answering
- text-generation
- text-classification
- question-answering
language:
- ko
- mn
- vi
- th
- id
- ru
- ja
- en
- fr
- es
- ar
- zh
pretty_name: 한국어기초사전
size_categories:
- 1M<n<10M
tags:
- dictionary
---
# Dataset Card for "basic_korean_dict"
This dataset is an NLP-ready form of the [Korean Basic Dictionary (한국어기초사전)](https://krdict.korean.go.kr/).
It follows the [original copyright policy (cc-by-sa-2.0)](https://krdict.korean.go.kr/kboardPolicy/copyRightTermsInfo).
Some words have usage examples in other languages, effectively making this a parallel corpus.
This version is built from xls_20230601.
[한국어 기초 사전](https://krdict.korean.go.kr/)을 학습 가능한 형태로 처리한 데이터입니다.
[한국어 기초 사전](https://krdict.korean.go.kr/kboardPolicy/copyRightTermsInfo)의 저작권을 따릅니다.
여러 언어로 이루어진 표제어들이 있어 병렬 말뭉치의 기능이 있습니다.
xls_20230601으로부터 생성되었습니다. |
asoria/copy_beans | 2023-07-26T15:55:27.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | asoria | Beans is a dataset of images of beans taken in the field using smartphone
cameras. It consists of 3 classes: 2 disease classes and the healthy class.
Diseases depicted include Angular Leaf Spot and Bean Rust. Data was annotated
by experts from the National Crops Resources Research Institute (NaCRRI) in
Uganda and collected by the Makerere AI research lab. | @ONLINE {beansdata,
author="Makerere AI Lab",
title="Bean disease dataset",
month="January",
year="2020",
url="https://github.com/AI-Lab-Makerere/ibean/"
} | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
pretty_name: Beans
dataset_info:
features:
- name: image_file_path
dtype: string
- name: image
dtype: image
- name: labels
dtype:
class_label:
names:
'0': angular_leaf_spot
'1': bean_rust
'2': healthy
splits:
- name: train
num_bytes: 382110
num_examples: 1034
- name: validation
num_bytes: 49711
num_examples: 133
- name: test
num_bytes: 46584
num_examples: 128
download_size: 180024906
dataset_size: 478405
---
# Dataset Card for Beans
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Beans Homepage](https://github.com/AI-Lab-Makerere/ibean/)
- **Repository:** [AI-Lab-Makerere/ibean](https://github.com/AI-Lab-Makerere/ibean/)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Summary
Bean leaf dataset with images of diseased and healthy leaves.
### Supported Tasks and Leaderboards
- `image-classification`: Based on a leaf image, the goal of this task is to predict the disease type (Angular Leaf Spot and Bean Rust), if any.
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image_file_path': '/root/.cache/huggingface/datasets/downloads/extracted/0aaa78294d4bf5114f58547e48d91b7826649919505379a167decb629aa92b0a/train/bean_rust/bean_rust_train.109.jpg',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x500 at 0x16BAA72A4A8>,
'labels': 1
}
```
### Data Fields
The data instances have the following fields:
- `image_file_path`: a `string` filepath to an image.
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `labels`: an `int` classification label.
Class Label Mappings:
```json
{
"angular_leaf_spot": 0,
"bean_rust": 1,
"healthy": 2,
}
```
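A minimal sketch of turning that mapping around, so integer predictions decode to disease names (and, per the note in Data Fields, index the row before the `image` column so that only one image file is decoded):

```python
# Sketch: invert the class-label mapping above so integer predictions decode
# to disease names. Per the Data Fields note, prefer dataset[0]["image"]
# over dataset["image"][0]: the former decodes a single image file, the
# latter decodes the entire column.
label2id = {"angular_leaf_spot": 0, "bean_rust": 1, "healthy": 2}
id2label = {v: k for k, v in label2id.items()}
print(id2label[1])  # bean_rust
```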
### Data Splits
| |train|validation|test|
|-------------|----:|---------:|---:|
|# of examples|1034 |133 |128 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@ONLINE {beansdata,
author="Makerere AI Lab",
title="Bean disease dataset",
month="January",
year="2020",
url="https://github.com/AI-Lab-Makerere/ibean/"
}
```
### Contributions
Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset. |
ninoscherrer/moralchoice | 2023-07-26T20:51:43.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | ninoscherrer | TBA | TBA | null | 5 | 4 | ---
pretty_name: MoralChoice
license: cc-by-4.0
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for MoralChoice
- **Homepage:** Coming Soon
- **Paper:** Coming soon
- **Repository:** [https://github.com/ninodimontalcino/moralchoice](https://github.com/ninodimontalcino/moralchoice)
- **Point of Contact:** [Nino Scherrer & Claudia Shi](mailto:nino.scherrer@gmail.com,claudia.j.shi@gmail.com?subject=[MoralChoice])
### Dataset Summary
*MoralChoice* is a survey dataset to evaluate the moral beliefs encoded in LLMs. The dataset consists of:
- **Survey Question Meta-Data:** 1767 hypothetical moral scenarios where each scenario consists of a description / context and two potential actions
- **Low-Ambiguity Moral Scenarios (687 scenarios):** One action is clearly preferred over the other.
- **High-Ambiguity Moral Scenarios (680 scenarios):** Neither action is clearly preferred
- **Survey Question Templates:** 3 hand-curated question templates
- **Survey Responses:** Outputs from 28 open- and closed-sourced LLMs
A statistical workflow for analyzing the survey responses can be found in the corresponding [paper]().
🚧 **Important**: 🚧
- *Moral scenarios* and *question templates* are already available.
- *Survey responses* will be uploaded shortly!
### Languages
*MoralChoice* is only available in English.
## Dataset Structure
### Data Fields
#### Moral Scenarios (Survey Question Meta-Data)
```
- scenario_id unique scenario identifier
- ambiguity level of ambiguity (low or high)
- generation_type generation type (hand-written or generated)
- context scenario description / contextualization
- action 1 description of a potential action
- action 2 description of a potential action
- a1_{rule} {rule} violation label of action 1
- a2_{rule} {rule} violation label of action 2
```
#### Survey Question Templates
```
- name name of question template (e.g., ab, repeat, compare)
- question_header question instruction header text
- question question template with placeholders
```
#### Survey Responses
```
- scenario_id unique scenario identifier
- model_id model identifier (e.g., openai/gpt-4)
- question_type question type (ab: A or B?, repeat: Repeat the preferred answer, compare: Do you prefer A over B? )
- question_ordering question ordering label (0: default order, 1: flipped order)
- question_header question instruction header text
- question_text question text
- answer_raw raw answer of model
- decision semantic answer of model (e.g., action1, action2, refusal, invalid)
- eval_technique evaluation technique used
- eval_top_p evaluation parameter - top_p
- eval_temperature evaluation parameter - temperature
- timestamp timestamp of model access
```
## Dataset Creation
### Generation of Moral Scenarios
The construction of *MoralChoice* follows a three-step procedure:
- **Scenario Generation:** We separately generate low- and high-ambiguity scenarios (i.e., the triple of scenario context, action 1, and action 2), guided by the 10 rules of Gert's common morality framework.
- **Low-Ambiguity Scenarios:** Zero-Shot Prompting Setup based on OpenAI's gpt-4
  - **High-Ambiguity Scenarios:** Stochastic Few-Shot Prompting Setup based on OpenAI's text-davinci-003, using a set of 100 hand-written scenarios
- **Scenario Curation:** We check the validity and grammar of each generated scenario manually and remove invalid scenarios. In addition, we assess lexical similarity between the generated scenarios and remove duplicates and overly-similar scenarios.
- **Auxiliary Label Acquisition:** We acquire auxiliary rule-violation labels through Surge AI for every scenario.
For detailed information, we refer to the corresponding paper.
## Collection of LLM responses
Across all models, we employ **temperature-based sampling** with `top-p=1.0` and `temperature=1.0`. For every specific question form (unique combination of scenario, question template, and answer option ordering), we collect multiple samples (5 for low-ambiguity scenarios and 10 for high-ambiguity scenarios). The raw sequences of output tokens were mapped to semantic actions (see the corresponding paper for exact details).
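The mapping from raw token outputs to semantic decisions can be sketched as follows. This is a simplified illustration, not the exact matching procedure from the paper; the refusal markers and option-matching rules below are assumptions.

```python
import re

# Hypothetical sketch: map a raw model answer to one of the semantic
# decision labels used in the `decision` field (action1, action2,
# refusal, invalid). The actual matching rules are described in the
# corresponding paper; these heuristics are illustrative only.
def map_answer(raw_answer: str, option_a: str, option_b: str) -> str:
    text = raw_answer.strip().lower()
    # Refusal markers are an assumed, non-exhaustive list.
    refusal_markers = ("i cannot", "i can't", "as an ai", "i refuse")
    if any(marker in text for marker in refusal_markers):
        return "refusal"
    # "ab"-style questions: a bare letter selects an option.
    if re.fullmatch(r"a\.?", text):
        return "action1"
    if re.fullmatch(r"b\.?", text):
        return "action2"
    # "repeat"-style questions: the answer restates one option.
    if option_a.lower() in text:
        return "action1"
    if option_b.lower() in text:
        return "action2"
    return "invalid"
```

In practice, answers that match neither option nor a refusal pattern are labeled `invalid`, mirroring the `decision` values listed above.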
### Annotations
To acquire high-quality annotations, we employ experienced annotators sourced through the data-labeling company [Surge AI](https://www.surgehq.ai/).
## Considerations for Using the Data
- Limited Diversity in Scenarios (professions, contexts)
- Limited Diversity in Question-Templates
- Limited to English
### Dataset Curators
- Nino Scherrer ([Website](https://ninodimontalcino.github.io/), [Mail](mailto:nino.scherrer@gmail.com?subject=[MoralChoice]))
- Claudia Shi ([Website](https://www.claudiajshi.com/), [Mail](mailto:nino.scherrer@gmail.com?subject=[MoralChoice]))
### Citation
```
@misc{scherrer2023moralchoice,
title={Evaluating the Moral Beliefs Encoded in LLMs},
    author={Scherrer, Nino and Shi, Claudia and Feder, Amir and Blei, David},
year={2023},
journal={arXiv:}
}
``` |
Irza/Arxiv_ph_indonesia | 2023-07-31T02:34:35.000Z | [
"task_categories:question-answering",
"language:id",
"license:mit",
"region:us"
] | Irza | null | null | null | 0 | 4 | ---
license: mit
task_categories:
- question-answering
language:
- id
pretty_name: Arxiv Physics Translated to Indonesian
--- |
smangrul/hf-stack-v1 | 2023-07-27T08:02:56.000Z | [
"region:us"
] | smangrul | null | null | null | 2 | 4 | ---
dataset_info:
features:
- name: repo_id
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 91907731
num_examples: 5905
download_size: 30589828
dataset_size: 91907731
---
# Dataset Card for "hf-stack-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
OdiaGenAI/odia_domain_context_train_v1 | 2023-08-06T09:09:54.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:or",
"license:cc-by-nc-sa-4.0",
"region:us"
] | OdiaGenAI | null | null | null | 0 | 4 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
language:
- or
pretty_name: odia_domain_context_train_v1
size_categories:
- 10K<n<100K
---
# Dataset Card for odia_domain_context_train_v1
## Dataset Description
- **Homepage: https://www.odiagenai.org/**
- **Repository: https://github.com/OdiaGenAI**
- **Point of Contact: Shantipriya Parida, and Sambit Sekhar**
### Dataset Summary
This dataset contains 10K instructions that span various facets of Odisha's unique identity.
The instructions cover a wide array of subjects, ranging from the culinary delights in 'RECIPES,' the historical significance of 'HISTORICAL PLACES,' and
'TEMPLES OF ODISHA,' to the intellectual pursuits in 'ARITHMETIC,' 'HEALTH,' and 'GEOGRAPHY.'
It also explores the artistic tapestry of Odisha through 'ART AND CULTURE,' and celebrates renowned figures in 'FAMOUS ODIA POETS/WRITERS'
and 'FAMOUS ODIA POLITICAL LEADERS'.
Furthermore, it encapsulates 'SPORTS' and the 'GENERAL KNOWLEDGE OF ODISHA,' providing an all-encompassing representation of the state.
These instructions reflect Odisha's rich heritage and are a practical and engaging resource for building a conversational AI that resonates with the region's people.
Each record in this dataset contains Odia instruction, input, and output strings.
### Supported Tasks and Leaderboards
Large Language Model (LLM)
### Languages
Odia
## Dataset Structure
JSON
## Data Fields
- output (string)
- instruction (string)
- input (string)
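A record with these fields can be assembled into a single training prompt, for example. The Alpaca-style template below is an assumption for illustration, not the template used by OdiaGenAI.

```python
# Hypothetical sketch: turn one JSON record of this dataset
# (instruction / input / output strings) into a single prompt string.
def build_prompt(record: dict) -> str:
    parts = [f"### Instruction:\n{record['instruction']}"]
    if record.get("input"):  # the input field may be empty
        parts.append(f"### Input:\n{record['input']}")
    parts.append(f"### Response:\n{record['output']}")
    return "\n\n".join(parts)
```

Records with an empty `input` field simply omit the input section.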
### Licensing Information
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
### Citation Information
If you find this repository useful, please consider giving 👏 and citing:
```
@misc{OdiaGenAI,
author = {Shantipriya Parida and Sambit Sekhar and Subhadarshi Panda and Soumendra Kumar Sahoo and Swateek Jena and Abhijeet Parida and Arghyadeep Sen and Satya Ranjan Dash and Deepak Kumar Pradhan},
title = {OdiaGenAI: Generative AI and LLM Initiative for the Odia Language},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
### Contributions
- Aisha Asif (KIIT, University, Bhubaneswar, India)
- Subham Pradhan (Silicon Institute of Technology, Bhubaneswar, India)
- Shantipriya Parida (Silo AI, Helsinki, Finland)
- Sambit Sekhar (Odia Generative AI, Bhubaneswar, India)
|
h2oai/openassistant_oasst1_h2ogpt_llama2_chat | 2023-07-31T06:09:41.000Z | [
"language:en",
"license:apache-2.0",
"gpt",
"llm",
"large language model",
"open-source",
"region:us"
] | h2oai | null | null | null | 0 | 4 | ---
license: apache-2.0
language:
- en
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- open-source
---
# h2oGPT Data Card
## Summary
H2O.ai's `openassistant_oasst1_h2ogpt_llama2_chat` is an open-source instruct-type dataset for fine-tuning of large language models, licensed for commercial use.
- Number of rows: `44219`
- Number of columns: `5`
- Column names: `['id', 'prompt_type', 'input', 'output', 'source']`
## Source
- [Original Open Assistant data in tree structure](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [This flattened dataset created by script in h2oGPT repository](https://github.com/h2oai/h2ogpt/blob/0bee5f50a74f489ca3fc81486f9322078360f2cb/src/create_data.py#L1296)
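The flattening step can be sketched as follows. This is a minimal illustration, not the actual h2oGPT script, and the node shape used here is an assumed simplification of the OASST tree format.

```python
# Minimal sketch of flattening a tree-structured conversation into
# flat input/output rows. The node shape is an assumption:
# {"text": ..., "role": "prompter" | "assistant", "replies": [...]}.
def flatten(node, history=""):
    rows = []
    text = history + node["text"]
    # Each assistant node yields one row: everything before it is the
    # input, and its own text is the output.
    if node["role"] == "assistant":
        rows.append({"input": history.rstrip(), "output": node["text"]})
    for child in node.get("replies", []):
        rows.extend(flatten(child, text + "\n"))
    return rows
```

Because each assistant reply in the tree becomes its own row, a tree with multiple alternative replies expands into multiple training examples.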
|
CyberHarem/saga_arknights | 2023-09-17T16:06:29.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 4 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of saga_arknights
This is the dataset of saga_arknights, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 461 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 461 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 461 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 461 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
TrainingDataPro/cut-2d-masks-presentation-attack-detection | 2023-09-14T16:36:05.000Z | [
"task_categories:video-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"finance",
"legal",
"code",
"region:us"
] | TrainingDataPro | The dataset consists of videos of individuals wearing printed 2D masks or
printed 2D masks with cut-out eyes and directly looking at the camera.
Videos are filmed in different lighting conditions and in different places
(indoors, outdoors). Each video in the dataset has an approximate duration of 2
seconds. | @InProceedings{huggingface:dataset,
title = {cut-2d-masks-presentation-attack-detection},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 4 | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
tags:
- finance
- legal
- code
dataset_info:
features:
- name: link
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 1452
num_examples: 48
download_size: 737352851
dataset_size: 1452
---
# Cut 2D Masks Presentation Attack Detection
The dataset consists of videos of individuals wearing printed 2D masks with cut-out holes for eyes, noses and mouths. Videos are filmed in different lighting conditions and in different places (*indoors, outdoors*); a person moves his/her head left, right, up and down. Each video in the dataset has an approximate duration of 7 seconds.
### Types of videos in the dataset:
- **2d_mask** - videos of the person wearing a printed 2D mask with cut-out holes for eyes.
- **cut_mask** - videos of the person wearing a printed 2D mask with cut-out holes for eyes, mouth and nose. All videos show masks with holes for the *eyes*; in some videos holes for both *mouth and nose* are made, in others only for the *mouth or nose*.
.png?generation=1690468363734380&alt=media)
People in the dataset wear different accessories, such as *glasses, caps, scarves, hats and masks*. Most of them are worn over a mask; however, *glasses and masks* can also be printed on the mask itself.
.png?generation=1690468790515642&alt=media)
The dataset serves as a valuable resource for computer vision, anti-spoofing tasks, video analysis, and security systems. It allows for the development of algorithms and models that can effectively detect attacks perpetrated by individuals wearing printed 2D masks.
Studying the dataset may lead to the development of improved security systems, surveillance technologies, and solutions to mitigate the risks associated with masked individuals carrying out attacks.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=cut-2d-masks-presentation-attack-detection) to discuss your requirements, learn about the price and buy the dataset.
# Content
### The dataset contains of two folders:
- **2d_masks** contains videos of the person wearing a printed 2D mask with cut-out holes for eyes.
- **cut_masks** includes videos of the person wearing a printed 2D mask with cut-out holes for eyes, mouth and nose.
### File with the extension .csv
- **link**: link to access the video,
- **type**: type of the attack: *with printed 2D mask with cut-out holes for eyes* OR *with printed 2D mask with cut-out holes for eyes, mouth and nose*
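The metadata file can be read with the standard library, for example. The sample rows below are invented for illustration and are not taken from the actual dataset.

```python
import csv
import io

# Minimal sketch of reading the .csv file described above.
# The links and values here are hypothetical examples.
sample = (
    "link,type\n"
    "https://example.com/video_001.mp4,2d_mask\n"
    "https://example.com/video_002.mp4,cut_mask\n"
)
rows = list(csv.DictReader(io.StringIO(sample)))
# Filter videos by attack type, e.g. keep only cut-mask attacks.
cut_mask_links = [r["link"] for r in rows if r["type"] == "cut_mask"]
```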
# Attacks might be collected in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=cut-2d-masks-presentation-attack-detection) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
zjunlp/KnowLM-Tool | 2023-07-29T02:26:54.000Z | [
"region:us"
] | zjunlp | null | null | null | 1 | 4 | Entry not found |
Tverous/anli-amr | 2023-07-30T11:46:56.000Z | [
"region:us"
] | Tverous | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: uid
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: reason
dtype: string
- name: claim_cleaned_amr
dtype: string
- name: amr_penman
dtype: string
- name: amr_tokens
sequence: string
- name: amr_nodes
dtype: string
- name: amr_alignments
dtype: string
- name: amr_edges
sequence:
sequence: string
splits:
- name: train
num_bytes: 146374351
num_examples: 100459
- name: dev
num_bytes: 1919899
num_examples: 1200
- name: test
num_bytes: 1907283
num_examples: 1200
download_size: 44471917
dataset_size: 150201533
---
# Dataset Card for "anli-amr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/saileach_arknights | 2023-09-17T16:09:03.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 4 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of saileach_arknights
This is the dataset of saileach_arknights, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 460 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 460 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 460 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 460 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
mylesmharrison/cornell-movie-dialog | 2023-08-01T02:03:08.000Z | [
"region:us"
] | mylesmharrison | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 21363514
num_examples: 304713
download_size: 13073496
dataset_size: 21363514
---
# Dataset Card for "cornell-movie-dialog"
This is a reduced version of the [Cornell Movie Dialog Corpus](https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html) by Cristian Danescu-Niculescu-Mizil.
The original dataset contains 220,579 conversational exchanges between 10,292 pairs of movie characters, involving 9,035 characters from 617 movies, for a total of 304,713 utterances.
This reduced version of the dataset contains only the character tags and utterances from the `movie_lines.txt` file, with one utterance per line, suitable for training generative text models.
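A line of the reduced corpus can be split into a character tag and an utterance, for example. The `CHARACTER: utterance` layout below is an assumption for illustration, not a documented property of this reduced version.

```python
# Hypothetical sketch: split one line of the reduced corpus into a
# (character, utterance) pair, assuming a "CHARACTER: utterance" layout.
def parse_line(line: str) -> tuple[str, str]:
    character, _, utterance = line.partition(":")
    return character.strip(), utterance.strip()
```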
## Dataset Description
- **Homepage:** https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html
- **Repository:** https://convokit.cornell.edu/documentation/movie.html
- **Paper:** [Chameleons in imagined conversations: A new approach to understanding
coordination of linguistic style in dialogs](https://www.cs.cornell.edu/~cristian/papers/chameleons.pdf)
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
|
CyberHarem/breeze_arknights | 2023-09-17T16:17:14.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 4 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of breeze_arknights
This is the dataset of breeze_arknights, containing 16 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 16 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 42 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 16 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 16 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 16 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 16 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 16 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 42 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 42 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 42 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/serika_bluearchive | 2023-09-17T16:17:17.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 4 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of serika_bluearchive
This is the dataset of serika_bluearchive, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 560 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 560 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 560 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 560 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/quercus_arknights | 2023-09-17T16:17:19.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 4 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of quercus_arknights
This is the dataset of quercus_arknights, containing 39 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 39 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 89 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 39 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 39 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 39 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 39 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 39 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 89 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 89 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 89 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/lunacub_arknights | 2023-09-17T16:17:24.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 4 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of lunacub_arknights
This is the dataset of lunacub_arknights, containing 33 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 33 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 80 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 33 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 33 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 33 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 33 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 33 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 80 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 80 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 80 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/typhon_arknights | 2023-09-17T16:17:28.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 4 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of typhon_arknights
This is the dataset of typhon_arknights, containing 23 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 23 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 54 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 23 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 23 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 23 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 23 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 23 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 54 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 54 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 54 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/blacknight_arknights | 2023-09-17T16:17:33.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 4 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of blacknight_arknights
This is the dataset of blacknight_arknights, containing 46 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 46 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 109 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 46 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 46 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 46 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 46 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 46 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 109 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 109 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 109 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/raiden_shogun_genshin | 2023-09-17T16:17:36.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 4 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of raiden_shogun_genshin
This is the dataset of raiden_shogun_genshin, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 549 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 549 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 549 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 549 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/erato_arknights | 2023-09-17T16:17:41.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 4 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of erato_arknights
This is the dataset of erato_arknights, containing 19 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 19 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 41 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 19 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 19 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 19 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 19 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 19 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 41 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 41 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 41 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/pudding_arknights | 2023-09-17T16:17:45.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 4 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of pudding_arknights
This is the dataset of pudding_arknights, containing 21 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 21 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 46 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 21 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 21 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 21 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 21 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 21 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 46 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 46 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 46 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
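
The `Download` links in the table above are repository-relative filenames. As an illustrative sketch (not part of the original card), a full download URL for any of these zips can be built with the standard Hugging Face Hub `resolve` URL layout for dataset repositories; in practice, `huggingface_hub.hf_hub_download` with `repo_type="dataset"` would handle this for you:

```python
from urllib.parse import quote

def hf_dataset_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build a direct download URL for a file in a Hugging Face dataset repo.

    Assumes the standard Hub layout:
    https://huggingface.co/datasets/<repo_id>/resolve/<revision>/<filename>
    """
    return (
        f"https://huggingface.co/datasets/{repo_id}"
        f"/resolve/{quote(revision)}/{quote(filename)}"
    )

# Example: the raw package listed in the table above.
url = hf_dataset_file_url("CyberHarem/pudding_arknights", "dataset-raw.zip")
print(url)
```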
|
CyberHarem/abigail_williams_fgo | 2023-09-17T16:17:48.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 4 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of abigail_williams_fgo
This is the dataset of abigail_williams_fgo, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 451 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 451 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 451 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 451 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|