| id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
lunarflu/the-hugging-face-case-for-open-AI | 2023-09-25T14:30:07.000Z | [
"region:us"
] | lunarflu | null | null | null | 0 | 0 | https://politico-tech.simplecast.com/episodes/the-hugging-face-case-for-open-ai |
CyberHarem/ousaka_shizuku_loveliveschoolidolfestivalallstars | 2023-09-25T14:32:55.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of ousaka_shizuku (Love Live! School Idol Festival ALL STARS)
This is the dataset of ousaka_shizuku (Love Live! School Idol Festival ALL STARS), containing 200 images and their tags.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 512 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 512 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 512 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 512 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
Heroinfather/juju | 2023-09-25T14:36:06.000Z | [
"region:us"
] | Heroinfather | null | null | null | 0 | 0 | Entry not found |
Zaid/dummy | 2023-09-26T08:41:01.000Z | [
"region:us"
] | Zaid | null | null | null | 0 | 0 | # Dataset Card for dummy
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage: [info]**
- **Repository: [info]**
- **Paper: [info]**
- **Leaderboard: [info]**
- **Point of Contact: [info]**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
lunarflu/open-source-generative-AI-at-hugging-face | 2023-09-25T14:56:49.000Z | [
"region:us"
] | lunarflu | null | null | null | 0 | 0 | https://www.youtube.com/watch?v=CV6UagCYo4c |
CyberHarem/yoneme_mei_lovelivesuperstar | 2023-09-25T15:02:26.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of yoneme_mei (Love Live! Superstar!!)
This is the dataset of yoneme_mei (Love Live! Superstar!!), containing 125 images and their tags.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 125 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 315 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 125 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 125 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 125 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 125 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 125 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 315 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 315 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 315 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
yuansiwe/MJ_Pairs_1 | 2023-09-25T15:20:09.000Z | [
"region:us"
] | yuansiwe | null | null | null | 0 | 0 | Entry not found |
ssahir/REPV | 2023-09-26T20:02:06.000Z | [
"region:us"
] | ssahir | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: path
dtype: string
- name: file
dtype: string
- name: gender
dtype: string
- name: emotion
dtype: string
- name: speech
sequence: float32
splits:
- name: train
num_bytes: 380197186
num_examples: 1628
- name: test
num_bytes: 92682047
num_examples: 407
download_size: 0
dataset_size: 472879233
---
# Dataset Card for "REPV"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bczhou/cc12m | 2023-09-25T16:07:07.000Z | [
"license:mit",
"region:us"
] | bczhou | null | null | null | 0 | 0 | ---
license: mit
---
|
CyberHarem/konoe_kanata_loveliveschoolidolfestivalallstars | 2023-09-25T16:29:07.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of konoe_kanata (Love Live! School Idol Festival ALL STARS)
This is the dataset of konoe_kanata (Love Live! School Idol Festival ALL STARS), containing 200 images and their tags.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 507 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 507 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 507 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 507 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
serhatkurt/data_modelGenerated | 2023-09-25T16:50:00.000Z | [
"region:us"
] | serhatkurt | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 1982154.0
num_examples: 16
download_size: 1983278
dataset_size: 1982154.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "data_modelGenerated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
serhatkurt/dataset_butterfly_newclass | 2023-09-25T16:57:07.000Z | [
"region:us"
] | serhatkurt | null | null | null | 0 | 0 | Entry not found |
katielink/dm_alphamissense | 2023-10-05T02:10:28.000Z | [
"license:cc-by-nc-sa-4.0",
"biology",
"region:us"
] | katielink | null | null | null | 0 | 0 | ---
license: cc-by-nc-sa-4.0
tags:
- biology
configs:
- config_name: gene_hg19
data_files: "AlphaMissense_gene_hg19.csv"
- config_name: gene_hg38
data_files: "AlphaMissense_gene_hg38.csv"
---
_Note: this database has been uploaded by Hugging Face staff. Please see the original paper, repo, and hosted database below for any questions._
# Google DeepMind AlphaMissense Database
<img src="https://www.science.org/cms/10.1126/science.adg7492/asset/e028b855-19a9-40ab-a39f-759afedb5b22/assets/images/large/science.adg7492-fa.jpg" alt="drawing" width="600"/>
- **Paper:** https://www.science.org/doi/10.1126/science.adg7492
- **Github Repo:** https://github.com/google-deepmind/alphamissense
- **Original Database:** https://console.cloud.google.com/storage/browser/dm_alphamissense
## File descriptions
* **AlphaMissense_hg19.tsv.gz, AlphaMissense_hg38.tsv.gz**: Predictions for all possible single nucleotide missense variants (71M) from 19k human
protein-coding genes (canonical transcripts) for both hg19 and hg38 coordinates. These
files are sorted by genomic coordinates.
* **AlphaMissense_gene_hg19.tsv.gz, AlphaMissense_gene_hg38.tsv.gz**: Gene-level average predictions, which were computed by taking the mean
alphamissense_pathogenicity over all possible missense variants in a transcript
(canonical transcript).
* **AlphaMissense_aa_substitutions.tsv.gz**: Predictions for all possible single amino acid substitutions within 20k UniProt canonical
isoforms (216M protein variants). These are a superset of the amino acid substitutions
induced by single nucleotide missense variants. This file uses UniProt accession
numbers for proteins and does not have genomic coordinates.
* **AlphaMissense_isoforms_hg38.tsv.gz**: Predictions for all possible missense variants for 60k non-canonical transcript isoforms
(hg38, GENCODE V32). This file has transcript_id but no UniProt accession numbers.
Predictions for non-canonical isoforms were not thoroughly evaluated and should be
used with caution. This file is sorted by genomic coordinates.
* **AlphaMissense_isoforms_aa_substitutions.tsv.gz**: Predictions for all possible single amino acid substitutions for 60k non-canonical
transcript isoforms (GENCODE V32). These are a superset of the amino acid
substitutions induced by single nucleotide missense variants. This file has transcript_id
but no UniProt accession numbers.
All transcript annotations are based on GENCODE V27 (hg19) or V32 (hg38).
Canonical transcripts are defined as described in the publication.
All files are compressed with bgzip.
## Column descriptions
**Note**: Not all columns are present in every file.
- **CHROM**
The chromosome as a string: `chr<N>`, where N is one of [1-22, X, Y, M].
- **POS**
Genome position (1-based).
- **REF**
The reference nucleotide (GRCh38.p13 for hg38, GRCh37.p13 for hg19).
- **ALT**
The alternative nucleotide.
- **genome**
The genome build, hg38 or hg19.
- **uniprot_id**
UniProtKB accession number of the protein in which the variant induces a single amino-acid substitution (UniProt release 2021_02).
- **transcript_id**
Ensembl transcript ID from GENCODE V27 (hg19) or V32 (hg38).
- **protein_variant**
Amino acid change induced by the alternative allele, in the format `<Reference amino acid><POS_aa><Alternative amino acid>` (e.g. V2L). POS_aa is the 1-based position of the residue within the protein amino acid sequence.
- **am_pathogenicity**
Calibrated AlphaMissense pathogenicity scores (ranging between 0 and 1), which can be interpreted as the predicted probability of a variant being clinically pathogenic.
- **am_class**
Classification of the `protein_variant` into one of three discrete categories: 'likely_benign', 'likely_pathogenic', or 'ambiguous'. These are derived using the following thresholds: 'likely_benign' if `alphamissense_pathogenicity` < 0.34; 'likely_pathogenic' if `alphamissense_pathogenicity` > 0.564; and 'ambiguous' otherwise.
- **mean_am_pathogenicity**
The average `alphamissense_pathogenicity` of all missense variants per transcript.
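The `am_class` thresholds described above can be sketched as a small helper function (an illustrative sketch; the function name is ours, not part of the database):

```python
def classify_variant(am_pathogenicity: float) -> str:
    """Map a calibrated AlphaMissense score to its discrete am_class.

    Thresholds per the card: < 0.34 -> 'likely_benign',
    > 0.564 -> 'likely_pathogenic', otherwise 'ambiguous'.
    """
    if am_pathogenicity < 0.34:
        return "likely_benign"
    if am_pathogenicity > 0.564:
        return "likely_pathogenic"
    return "ambiguous"
```

Applied to the `am_pathogenicity` column of any of the variant files, this rule should reproduce the published `am_class` labels.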
## Citation/license and disclaimer
AlphaMissense Database Copyright (2023) DeepMind Technologies Limited. All predictions are provided for non-commercial research use only under [CC BY-NC-SA license](https://creativecommons.org/licenses/by-nc-sa/4.0/).
Researchers interested in predictions not yet provided, and for non-commercial use, can send an expression of interest to [alphamissense@google.com](mailto:alphamissense@google.com).
## Disclaimer
The AlphaMissense Database and other information provided on this site is for theoretical modelling only, caution should be exercised in use. It is provided “as-is” without any warranty of any kind, whether express or implied. For clarity, no warranty is given that use of the information shall not infringe the rights of any third party. The information provided is not intended to be a substitute for professional medical advice, diagnosis, or treatment, and does not constitute medical or other professional advice. The predictions in the AlphaMissense Database are predictions only, with varying levels of confidence and should be interpreted carefully.
## Citation
If you use this resource for your research please cite the following publication:
“Accurate proteome-wide missense variant effect prediction with AlphaMissense”
Jun Cheng, Guido Novati, Joshua Pan, Clare Bycroft, Akvilė Žemgulytė, Taylor Applebaum, Alexander Pritzel, Lai Hong Wong, Michal Zielinski, Tobias Sargeant, Rosalia G. Schneider, Andrew W. Senior, John Jumper, Demis Hassabis, Pushmeet Kohli, Žiga Avsec
Use of the AlphaMissense Database is subject to [Google Cloud Platform Terms of Service](https://cloud.google.com/terms).
|
udmurtNLP/udmurt-bible-parallel-corpora | 2023-09-28T16:31:55.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:udm",
"region:us"
] | udmurtNLP | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: udm
dtype: string
- name: ru
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 15350364
num_examples: 33752
download_size: 6172011
dataset_size: 15350364
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- translation
size_categories:
- 10K<n<100K
language:
- udm
---
# About dataset
Source: http://finugorbib.com/index.html |
CyberHarem/maruyama_aya_bangdream | 2023-09-25T17:01:53.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of maruyama_aya (BanG Dream!)
This is the dataset of maruyama_aya (BanG Dream!), containing 200 images and their tags.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 457 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 457 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 457 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 457 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/nakasu_kasumi_loveliveschoolidolfestivalallstars | 2023-09-25T17:02:44.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nakasu_kasumi (Love Live! School Idol Festival ALL STARS)
This is the dataset of nakasu_kasumi (Love Live! School Idol Festival ALL STARS), containing 200 images and their tags.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 519 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 519 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 519 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 519 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
jimmybaek/jimmybaek-llama2-826 | 2023-09-25T17:03:58.000Z | [
"region:us"
] | jimmybaek | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3304
num_examples: 826
download_size: 715
dataset_size: 3304
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jimmybaek-llama2-826"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HYF12053/SDXL | 2023-09-25T18:38:02.000Z | [
"region:us"
] | HYF12053 | null | null | null | 0 | 0 | Entry not found |
Weni/Zeroshot_Test-Gold-1K_nenhuma | 2023-09-26T12:40:22.000Z | [
"region:us"
] | Weni | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: texto
dtype: string
- name: true_class
dtype: string
- name: BERT
dtype: string
splits:
- name: test
num_bytes: 188891
num_examples: 1000
download_size: 54999
dataset_size: 188891
---
# Dataset Card for "Zeroshot_Gold_Test-1K_nenhuma"
This dataset is a test dataset for the Zeroshot models.
It contains 1,000 examples in prompt format, exclusively for testing with the class 'nenhuma', in Brazilian Portuguese.
Prompt:
```
"Classifique o tweet entre 'classe1', 'classe2', 'classe3', 'classe4' \\n\\nTweet: frase \\n\\nLabel:
```
## How to load and use this dataset:
```
from datasets import load_dataset
dataset = load_dataset("Weni/Zeroshot_Gold_Test-1K_nenhuma")
dataset
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anonymoussubmissions/text2tile_large | 2023-09-25T17:45:31.000Z | [
"region:us"
] | anonymoussubmissions | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 4317901388.0
num_examples: 164662
download_size: 4276914179
dataset_size: 4317901388.0
---
# Dataset Card for "text2tile_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/yurucamp | 2023-09-29T11:58:27.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Yuru Camp
This is the image base of the bangumi Yuru Camp. We detected 25 characters and 3,285 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may contain some noise.** If you intend to manually train models using this dataset, we recommend preprocessing the downloaded data to eliminate potentially noisy samples (approximately 1% of images).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 772 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 9 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 10 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 158 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 242 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 15 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 49 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 41 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 60 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 218 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 60 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 20 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 12 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 478 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 52 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 17 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 22 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 15 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 17 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 26 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 770 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 33 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 21 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 22 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 146 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
trevin-wadu/npm3d-kitti-carla | 2023-09-27T06:24:04.000Z | [
"license:other",
"region:us"
] | trevin-wadu | null | null | null | 0 | 0 | ---
license: other
---
|
CyberHarem/mifune_shioriko_lovelivenijigasakihighschoolidolclub | 2023-09-25T18:09:56.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of mifune_shioriko (Love Live! Nijigasaki Gakuen School Idol Doukoukai)
This is the dataset of mifune_shioriko (Love Live! Nijigasaki Gakuen School Idol Doukoukai), containing 200 images and their tags.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 507 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 507 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 507 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 507 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
Alific/microsoftexel | 2023-09-25T18:15:55.000Z | [
"region:us"
] | Alific | null | null | null | 0 | 0 | Entry not found |
Weni/Zeroshot_Test-Gold-1K_bias | 2023-09-25T18:43:42.000Z | [
"region:us"
] | Weni | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: texto
dtype: string
- name: true_class
dtype: string
- name: BERT
dtype: string
splits:
- name: test
num_bytes: 183928
num_examples: 1000
download_size: 54527
dataset_size: 183928
---
# Dataset Card for "Zeroshot_Gold_Test-1K_bias"
This dataset is a test dataset for the Zeroshot models.
It contains 1,000 examples in prompt format, exclusively for testing with the class 'bias', in Brazilian Portuguese.
Prompt:
```
"Classifique o tweet entre 'classe1', 'classe2', 'classe3', 'classe4', 'bias' \\n\\nTweet: frase \\n\\nLabel:
```
## How to load and use this dataset:
```
from datasets import load_dataset
dataset = load_dataset("Weni/Zeroshot_Gold_Test-1K_bias")
dataset
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/tsurumaki_kokoro_bangdream | 2023-09-25T18:47:39.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of tsurumaki_kokoro (BanG Dream!)
This is the dataset of tsurumaki_kokoro (BanG Dream!), containing 200 images and their tags.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 477 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 477 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 477 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 477 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
Weni/Zeroshot_Test-Gold-1K_other | 2023-09-25T18:56:09.000Z | [
"region:us"
] | Weni | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: texto
dtype: string
- name: true_class
dtype: string
- name: BERT
dtype: string
splits:
- name: test
num_bytes: 186270
num_examples: 1000
download_size: 55029
dataset_size: 186270
---
# Dataset Card for "Zeroshot_Gold_Test-1K_other"
This dataset is a test dataset for the Zeroshot models.
It contains 1,000 examples in prompt format, exclusively for testing with the class 'other', in Brazilian Portuguese.
Prompt:
```
"Classifique o tweet entre 'classe1', 'classe2', 'classe3', 'classe4', 'other' \\n\\nTweet: frase \\n\\nLabel:
```
## How to load and use this dataset:
```
from datasets import load_dataset
dataset = load_dataset("Weni/Zeroshot_Gold_Test-1K_other")
dataset
```
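Assembling the prompt described above for a new tweet can be sketched as follows (a minimal illustration; `build_prompt` and the class names are placeholders taken from the example, not the dataset's real labels, and treating `\n` as a real newline is an assumption):

```python
def build_prompt(tweet: str, classes: list[str]) -> str:
    """Assemble the zero-shot classification prompt in the format shown above."""
    labels = ", ".join(f"'{c}'" for c in classes)
    return f"Classifique o tweet entre {labels} \n\nTweet: {tweet} \n\nLabel:"

prompt = build_prompt("bom dia!", ["classe1", "classe2", "classe3", "classe4", "other"])
```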
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yimingzhang/llms-hw2 | 2023-09-25T19:03:56.000Z | [
"region:us"
] | yimingzhang | null | null | null | 0 | 0 | Entry not found |
CyberHarem/uehara_ayumu_loveliveschoolidolfestivalallstars | 2023-09-25T18:57:50.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of uehara_ayumu (Love Live! School Idol Festival ALL STARS)
This is the dataset of uehara_ayumu (Love Live! School Idol Festival ALL STARS), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 485 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 485 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 485 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 485 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
amitness/logits-mt-it-en-128 | 2023-09-27T10:27:21.000Z | [
"region:us"
] | amitness | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: teacher_logits
sequence:
sequence: float64
- name: teacher_indices
sequence:
sequence: int64
- name: teacher_mask_indices
sequence: int64
splits:
- name: train
num_bytes: 184467361976
num_examples: 40721350
- name: test
num_bytes: 32556394204
num_examples: 7186121
download_size: 0
dataset_size: 217023756180
---
# Dataset Card for "logits-mt-it-en-128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
okaychamp/mymodel | 2023-09-25T19:34:22.000Z | [
"region:us"
] | okaychamp | null | null | null | 0 | 0 | Entry not found |
marasama/nva-agano2 | 2023-09-25T19:38:21.000Z | [
"region:us"
] | marasama | null | null | null | 0 | 0 | Entry not found |
aip0p/arangoqueries | 2023-09-25T19:41:05.000Z | [
"region:us"
] | aip0p | null | null | null | 0 | 0 | Entry not found |
AmelieSchreiber/1111K_binding_sites | 2023-10-01T01:23:16.000Z | [
"license:mit",
"region:us"
] | AmelieSchreiber | null | null | null | 0 | 0 | ---
license: mit
---
This dataset is curated from UniProt. The test set was created by randomly selecting entire protein families to hold out.
The train/test split is approximately 80/20. All binding site and active site annotations were merged. All sequences longer than
1000 amino acids were split into non-overlapping chunks of 1000 residues or less. |
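The chunking step described above can be sketched as follows (a minimal illustration; `chunk_sequence` is a hypothetical helper, not the curator's actual code):

```python
def chunk_sequence(seq: str, max_len: int = 1000) -> list[str]:
    """Split a sequence into non-overlapping chunks of at most max_len residues."""
    return [seq[i:i + max_len] for i in range(0, len(seq), max_len)]

# A 2,350-residue sequence yields chunks of 1000, 1000, and 350 residues.
chunks = chunk_sequence("A" * 2350)
print([len(c) for c in chunks])  # → [1000, 1000, 350]
```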
CyberHarem/zhong_lanzhu_lovelivenijigasakihighschoolidolclub | 2023-09-25T19:54:20.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of zhong_lanzhu (Love Live! Nijigasaki Gakuen School Idol Doukoukai)
This is the dataset of zhong_lanzhu (Love Live! Nijigasaki Gakuen School Idol Doukoukai), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 521 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 521 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 521 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 521 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
sarinasabharwal19/sarina | 2023-09-25T20:05:51.000Z | [
"region:us"
] | sarinasabharwal19 | null | null | null | 0 | 0 | Entry not found |
atmallen/amazon_polarity_embeddings_random1 | 2023-09-26T05:23:05.000Z | [
"region:us"
] | atmallen | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: content
dtype: string
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
- name: embedding
sequence: float32
- name: title
dtype: string
splits:
- name: train
num_bytes: 7148364432
num_examples: 3600000
- name: test
num_bytes: 19940712
num_examples: 10000
download_size: 3902806188
dataset_size: 7168305144
---
# Dataset Card for "amazon_polarity_embeddings_random1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/emma_verde_loveliveschoolidolfestivalallstars | 2023-09-25T20:30:43.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of emma_verde (Love Live! School Idol Festival ALL STARS)
This is the dataset of emma_verde (Love Live! School Idol Festival ALL STARS), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 490 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 490 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 490 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 490 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/mitake_ran_bangdream | 2023-09-25T21:02:50.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of mitake_ran (BanG Dream!)
This is the dataset of mitake_ran (BanG Dream!), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 516 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 516 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 516 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 516 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/asaka_karin_lovelivenijigasakihighschoolidolclub | 2023-09-25T21:48:12.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of asaka_karin (Love Live! Nijigasaki Gakuen School Idol Doukoukai)
This is the dataset of asaka_karin (Love Live! Nijigasaki Gakuen School Idol Doukoukai), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 496 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 496 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 496 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 496 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
Globaly/categories | 2023-09-25T21:53:08.000Z | [
"region:us"
] | Globaly | null | null | null | 0 | 0 | Entry not found |
Tonic/image-classification-mistakes | 2023-09-25T22:00:35.000Z | [
"region:us"
] | Tonic | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
riquinho21/fb | 2023-09-25T22:30:27.000Z | [
"license:other",
"region:us"
] | riquinho21 | null | null | null | 0 | 0 | ---
license: other
---
|
AescF/common_language_preprocessed | 2023-09-25T23:40:35.000Z | [
"region:us"
] | AescF | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: sentence
dtype: string
- name: age
dtype: string
- name: gender
dtype: string
- name: label
dtype:
class_label:
names:
'0': Arabic
'1': Basque
'2': Breton
'3': Catalan
'4': Chinese_China
'5': Chinese_Hongkong
'6': Chinese_Taiwan
'7': Chuvash
'8': Czech
'9': Dhivehi
'10': Dutch
'11': English
'12': Esperanto
'13': Estonian
'14': French
'15': Frisian
'16': Georgian
'17': German
'18': Greek
'19': Hakha_Chin
'20': Indonesian
'21': Interlingua
'22': Italian
'23': Japanese
'24': Kabyle
'25': Kinyarwanda
'26': Kyrgyz
'27': Latvian
'28': Maltese
'29': Mangolian
'30': Persian
'31': Polish
'32': Portuguese
'33': Romanian
'34': Romansh_Sursilvan
'35': Russian
'36': Sakha
'37': Slovenian
'38': Spanish
'39': Swedish
'40': Tamil
'41': Tatar
'42': Turkish
'43': Ukranian
'44': Welsh
- name: input_values
sequence: float32
- name: attention_mask
sequence: int32
splits:
- name: train
num_bytes: 13848986619
num_examples: 22194
- name: validation
num_bytes: 3461442109
num_examples: 5888
- name: test
num_bytes: 3473659131
num_examples: 5963
download_size: 0
dataset_size: 20784087859
---
# Dataset Card for "common_language_preprocessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atmallen/amazon_polarity_embeddings_random2 | 2023-09-26T07:22:35.000Z | [
"region:us"
] | atmallen | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: content
dtype: string
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
- name: embedding
sequence: float32
- name: title
dtype: string
splits:
- name: train
num_bytes: 7148364432
num_examples: 3600000
- name: test
num_bytes: 19940712
num_examples: 10000
download_size: 3900873029
dataset_size: 7168305144
---
# Dataset Card for "amazon_polarity_embeddings_random2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mrfakename/librivox-full-catalog-archive | 2023-09-25T22:43:02.000Z | [
"license:cc0-1.0",
"region:us"
] | mrfakename | null | null | null | 0 | 0 | ---
license: cc0-1.0
---
# LibriVox Catalog Archive
Note: this archive does not include any audio files; it contains only the catalog metadata.
## What is LibriVox?
LibriVox is a catalog of free and public domain audiobooks. [Learn more...](https://librivox.org/)
Last updated: Sep 25, 2023 |
CyberHarem/minato_yukina_bangdream | 2023-09-25T22:44:43.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of minato_yukina (BanG Dream!)
This is the dataset of minato_yukina (BanG Dream!), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 437 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 437 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 437 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 437 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
Pixelatory/GDB-11 | 2023-09-25T23:10:50.000Z | [
"size_categories:10M<n<100M",
"chemistry",
"biology",
"region:us"
] | Pixelatory | null | null | null | 0 | 0 | ---
tags:
- chemistry
- biology
size_categories:
- 10M<n<100M
---
26,425,839 samples.
Contains only the unique, RDKit-canonicalized SMILES molecules, in CSV format (after extraction), from the original "Entire GDB-11" dataset found at https://gdb.unibe.ch/downloads/. |
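The canonicalize-then-deduplicate preprocessing described above can be sketched as follows (an illustration under stated assumptions, not the dataset's actual pipeline; the RDKit calls require `pip install rdkit`, and the identity fallback exists only so the sketch runs without it):

```python
# Sketch: canonicalize each SMILES with RDKit, then keep only the first
# occurrence of each canonical form.
try:
    from rdkit import Chem  # requires `pip install rdkit`

    def canonical(smiles: str) -> str:
        return Chem.MolToSmiles(Chem.MolFromSmiles(smiles))
except ImportError:
    def canonical(smiles: str) -> str:
        # Fallback (identity) so the sketch runs without RDKit installed.
        return smiles

def dedup_canonical(smiles_iter) -> list[str]:
    seen, unique = set(), []
    for s in smiles_iter:
        c = canonical(s)
        if c not in seen:
            seen.add(c)
            unique.append(c)
    return unique
```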
Pixelatory/GDB-13 | 2023-09-25T23:10:33.000Z | [
"size_categories:100M<n<1B",
"chemistry",
"biology",
"region:us"
] | Pixelatory | null | null | null | 0 | 0 | ---
tags:
- chemistry
- biology
size_categories:
- 100M<n<1B
---
975,820,226 samples.
Contains only the unique, RDKit-canonicalized SMILES molecules, in CSV format (after extraction), from the original "Entire GDB-13" dataset found at https://gdb.unibe.ch/downloads/. |
Manduzamzam/manduzamzam | 2023-09-25T22:54:19.000Z | [
"region:us"
] | Manduzamzam | null | null | null | 0 | 0 | Entry not found |
Manduzamzam/Hi | 2023-09-26T05:42:50.000Z | [
"region:us"
] | Manduzamzam | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 5886291.0
num_examples: 112
- name: validation
num_bytes: 63094.0
num_examples: 10
download_size: 0
dataset_size: 5949385.0
---
# Dataset Card for "Hi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/mia_taylor_lovelivenijigasakihighschoolidolclub | 2023-09-25T22:59:05.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of mia_taylor (Love Live! Nijigasaki Gakuen School Idol Doukoukai)
This is the dataset of mia_taylor (Love Live! Nijigasaki Gakuen School Idol Doukoukai), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 515 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 515 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 515 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 515 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
Pixelatory/GDB-17 | 2023-09-25T23:18:47.000Z | [
"size_categories:10M<n<100M",
"chemistry",
"biology",
"region:us"
] | Pixelatory | null | null | null | 0 | 0 | ---
tags:
- chemistry
- biology
size_categories:
- 10M<n<100M
---
49,999,990 samples.
Contains only the unique, RDKit-canonicalized SMILES molecules, in CSV format (after extraction), from the original "GDB-17-Set" dataset found at https://gdb.unibe.ch/downloads/. |
Pixelatory/PubChem-04-30-2023 | 2023-09-25T23:28:43.000Z | [
"size_categories:100M<n<1B",
"chemistry",
"biology",
"region:us"
] | Pixelatory | null | null | null | 0 | 0 | ---
tags:
- chemistry
- biology
size_categories:
- 100M<n<1B
---
114,218,565 samples.
Contains only the unique, RDKit-canonicalized SMILES molecules, in CSV format (after extraction), from the PubChem dataset found at https://ftp.ncbi.nlm.nih.gov/pubchem/Compound/CURRENT-Full/. PubChem compounds were collected on 30 April 2023. |
PandaGuy/Climate_Change_101-125 | 2023-09-26T00:25:51.000Z | [
"region:us"
] | PandaGuy | null | null | null | 0 | 0 | Entry not found |
atmallen/amazon_polarity_embeddings_random3 | 2023-09-26T17:30:32.000Z | [
"region:us"
] | atmallen | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: content
dtype: string
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
- name: embedding
sequence: float32
- name: title
dtype: string
splits:
- name: train
num_bytes: 7148364432
num_examples: 3600000
- name: test
num_bytes: 19940712
num_examples: 10000
download_size: 3898006107
dataset_size: 7168305144
---
# Dataset Card for "amazon_polarity_embeddings_random3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atmallen/amazon_polarity_embeddings_random4 | 2023-09-26T18:59:15.000Z | [
"region:us"
] | atmallen | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: content
dtype: string
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
- name: embedding
sequence: float32
- name: title
dtype: string
splits:
- name: train
num_bytes: 7148364432
num_examples: 3600000
- name: test
num_bytes: 19940712
num_examples: 10000
download_size: 3912035793
dataset_size: 7168305144
---
# Dataset Card for "amazon_polarity_embeddings_random4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
smazu001/wiki_medical_terms_llama2 | 2023-09-26T00:44:59.000Z | [
"region:us"
] | smazu001 | null | null | null | 0 | 0 | Entry not found |
DavidLanz/yentinglin-traditional_mandarin_instructions | 2023-09-26T01:37:42.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | DavidLanz | null | null | null | 0 | 0 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
- text-generation
language:
- zh
- en
size_categories:
- 10K<n<100K
---
Language Models for Taiwanese Culture training dataset.
## Citation
Please cite this repo if you use its data or code.
```
@inproceedings{lin-chen-2023-llm,
title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
author = "Lin, Yen-Ting and Chen, Yun-Nung",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.5",
pages = "47--58"
}
@misc{taiwanllama,
author={Lin, Yen-Ting and Chen, Yun-Nung},
title={Taiwanese-Aligned Language Models based on Meta-Llama2},
year={2023},
url={https://github.com/adamlin120/Taiwan-LLaMa},
note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa},
}
``` |
solomoon/mypic | 2023-09-26T01:21:22.000Z | [
"license:openrail",
"region:us"
] | solomoon | null | null | null | 0 | 0 | ---
license: openrail
---
|
mickylan2367/ParquetPractice | 2023-10-07T10:54:50.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"music",
"region:us"
] | mickylan2367 | null | null | null | 0 | 0 | ---
license: cc-by-4.0
language:
- en
tags:
- music
size_categories:
- 1K<n<10K
---
* Spectrograms generated from the Google/MusicCaps data.
* The content is the same as mickylan2367/ColorSpectrogram (just converted to Parquet files).
* Basically, this repository is a testing ground for Hugging Face experiments. |
pykeio/ap-cori | 2023-09-26T01:32:04.000Z | [
"license:cc0-1.0",
"region:us"
] | pykeio | null | null | null | 0 | 0 | ---
license: cc0-1.0
---
|
solomoon/sd-configs-4 | 2023-09-26T01:32:50.000Z | [
"region:us"
] | solomoon | null | null | null | 0 | 0 | Entry not found |
Nasssss/ThiagoFinch | 2023-09-26T01:42:14.000Z | [
"region:us"
] | Nasssss | null | null | null | 0 | 0 | Entry not found |
joelyu/standard_dataset | 2023-09-26T10:27:57.000Z | [
"region:us"
] | joelyu | null | null | null | 0 | 0 | Entry not found |
CyberHarem/matsubara_kanon_bangdream | 2023-09-26T01:44:42.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of matsubara_kanon (BanG Dream!)
This is the dataset of matsubara_kanon (BanG Dream!), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 465 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 465 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 465 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 465 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
DavidLanz/yentinglin-zh_TW_c4 | 2023-09-26T01:58:39.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | DavidLanz | null | null | null | 0 | 0 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
- text-generation
language:
- zh
- en
size_categories:
- 10K<n<100K
---
Language Models for Taiwanese Culture training dataset.
## Citation
Please cite this repo if you use its data or code.
```
@inproceedings{lin-chen-2023-llm,
title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
author = "Lin, Yen-Ting and Chen, Yun-Nung",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.5",
pages = "47--58"
}
@misc{taiwanllama,
author={Lin, Yen-Ting and Chen, Yun-Nung},
title={Taiwanese-Aligned Language Models based on Meta-Llama2},
year={2023},
url={https://github.com/adamlin120/Taiwan-LLaMa},
note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa},
}
``` |
DavidLanz/traditional-mandarin-input-output | 2023-09-26T02:26:47.000Z | [
"license:cc-by-4.0",
"region:us"
] | DavidLanz | null | null | null | 0 | 0 | ---
license: cc-by-4.0
---
|
kurusunagisa/jqac | 2023-09-26T02:20:25.000Z | [
"region:us"
] | kurusunagisa | null | null | null | 0 | 0 | # Japanese Question-Answering Corpus (JQAC)
This work was supported by Works Applications Co., Ltd.
This content has been maintained by Hiroki Tanioka (taniokah[at]gmail.com) since 2018.
[https://taniokah.github.io/jqac/](https://taniokah.github.io/jqac/)
---
license: cc-by-sa-4.0
---
|
kephalian/Ear_drum_identification | 2023-09-26T02:38:32.000Z | [
"task_categories:object-detection",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"doi:10.57967/hf/1151",
"region:us"
] | kephalian | null | null | null | 0 | 0 | ---
license: apache-2.0
task_categories:
- object-detection
language:
- en
size_categories:
- n<1K
---
This is a YOLO-format dataset with images annotated using Roboflow.
All the images are of healthy, normal human ear drums, or tympanic membranes.
Both right and left tympanic membranes are included.
The idea was to create a model to identify normal versus diseased ear drums (mostly by the absence of light reflex).
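For reference, each YOLO-format label line pairs a class id with a normalized bounding box (center x, center y, width, height). A minimal parser might look like this; the sample line is illustrative, not taken from the dataset:

```python
def parse_yolo_label(line: str):
    """Parse one YOLO-format label line: 'class cx cy w h', with
    all coordinates normalized to the range [0, 1]."""
    class_id, cx, cy, w, h = line.split()
    return int(class_id), (float(cx), float(cy), float(w), float(h))

# A hypothetical label for an ear-drum bounding box.
cls, box = parse_yolo_label("0 0.52 0.48 0.30 0.25")
print(cls, box)
```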
The model was able to reach 100% accuracy with this dataset in correctly identifying the presence of light reflex. |
DavidLanz/zh_TW_c4 | 2023-09-26T06:08:24.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | DavidLanz | null | null | null | 0 | 0 | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
- summarization
- text-generation
language:
- zh
- en
size_categories:
- 10K<n<100K
---
Training dataset for Language Models for Taiwanese Culture.
## Citation
Please cite the repo if you use the data or code in this repo.
```
@inproceedings{lin-chen-2023-llm,
title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
author = "Lin, Yen-Ting and Chen, Yun-Nung",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.5",
pages = "47--58"
}
@misc{taiwanllama,
author={Lin, Yen-Ting and Chen, Yun-Nung},
title={Taiwanese-Aligned Language Models based on Meta-Llama2},
year={2023},
url={https://github.com/adamlin120/Taiwan-LLaMa},
note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa},
}
``` |
CyberHarem/shirokane_rinko_bangdream | 2023-09-26T02:55:24.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of shirokane_rinko (BanG Dream!)
This is the dataset of shirokane_rinko (BanG Dream!), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 438 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 438 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 438 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 438 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
tat1111/cad_blockchain_smartcontract | 2023-10-09T02:36:03.000Z | [
"size_categories:n<1K",
"language:en",
"license:afl-3.0",
"smartcontract",
"blockchain",
"region:us"
] | tat1111 | null | null | null | 0 | 0 | ---
license: afl-3.0
language:
- en
tags:
- smartcontract
- blockchain
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
## Dataset Description
This dataset is collected from top tokens on Etherscan whose contracts are verified and open source. It contains useful information about each token, such as the token address, holders, source code, and so on.
In addition, we focus on analyzing the source code of each token's contract, which is reflected in the tags column of this dataset. Each tags value contains an LLM analysis of the contract source code, which is useful for Solidity contract generation by LLMs.
### Dataset Summary
This dataset contains 877 rows in total. Each row includes the following features:
- token name (string): the name of the top token, collected from Etherscan
- max_total_supply (integer): the maximum total supply of the token
- holders (string): the number of holders of the token
- total_transfers (integer): the number of transfer transactions of the token
- circulating_supply_market_cap (string): the total market value of the token based on its circulating supply
- fully_diluted_market_cap (string): the total market value of the token based on its maximum (fully diluted) supply
- contract_address (string): the address of the token contract
- source_code (string): the source code of the contract
- abi (string): the ABI (Application Binary Interface) of the contract
- tags (json): the LLM analysis of the source code, stored as JSON. The structure of tags is:
```python
{
    "Pragma": "<Pragma>",
    "Contracts": [
        {
            "name": "<Contract_name>",
            "role": "<Contract_role>",
            "functions": { "<func_name>": "<func_role>" },
            "modifier": { "<modifier_name>": "<modifier_role>" }
        }
    ],
    "Interface": [
        {
            "name": "<Interface_name>",
            "role": "<Interface_role>",
            "functions": { "<func_name>": "<func_role>" },
            "modifier": { "<modifier_name>": "<modifier_role>" }
        }
    ],
    "Library": [
        {
            "name": "<Library_name>",
            "role": "<Library_role>",
            "functions": { "<func_name>": "<func_role>" },
            "modifier": { "<modifier_name>": "<modifier_role>" }
        }
    ]
}
```
The tags value contains the name and role of each contract/library/interface, along with the names and roles of the functions within it. Tags can help weaker LLMs clearly figure out what users need and respond with the correct answer.
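As a minimal sketch of how a tags value following this structure might be consumed, the snippet below lists each contract's functions and roles; the sample value is made up for illustration:

```python
import json

# A hypothetical tags value following the structure described above.
raw_tags = json.dumps({
    "Pragma": "^0.8.0",
    "Contracts": [
        {
            "name": "MyToken",
            "role": "ERC-20 token implementation",
            "functions": {"transfer": "moves tokens between accounts"},
            "modifier": {"onlyOwner": "restricts access to the owner"},
        }
    ],
    "Interface": [],
    "Library": [],
})

tags = json.loads(raw_tags)
for contract in tags["Contracts"]:
    for func_name, func_role in contract["functions"].items():
        print(f"{contract['name']}.{func_name}: {func_role}")
```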
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
- The dataset is in the English language (en).
- Smart contracts (source code) are written in the Solidity programming language.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- token name (string): the name of the top token, collected from Etherscan
- max_total_supply (integer): the maximum total supply of the token
- holders (string): the number of holders of the token
- total_transfers (integer): the number of transfer transactions of the token
- circulating_supply_market_cap (string): the total market value of the token based on its circulating supply
- fully_diluted_market_cap (string): the total market value of the token based on its maximum (fully diluted) supply
- contract_address (string): the address of the token contract
- source_code (string): the source code of the contract
- abi (string): the ABI (Application Binary Interface) of the contract
- tags (json): the LLM analysis of the source code, stored as JSON (see the structure above).
## Dataset Creation
To collect the token information (except tags), we use BeautifulSoup4 to crawl contracts from Etherscan's top-token listing.
For the tags, we built a tool called "Labeling Tool for Smart Contract Dataset Based on LLM".
This tool uses an LLM such as GPT-3.5 to figure out the structure of contracts and the role of every part.
We also built a SmartContractTagging agent to complete this task. You can find our code at this GitHub link: xxxx
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
|
CyberHarem/saito_ena_yurucamp | 2023-09-26T15:16:07.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Saitō Ena
This is the dataset of Saitō Ena, containing 268 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 268 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 653 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 756 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 268 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 268 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 268 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 653 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 653 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 562      | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels.                       |
| stage3-eyes-640 | 756 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 756 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
MerHS/tokenized-codeparrot-train | 2023-09-26T03:31:35.000Z | [
"region:us"
] | MerHS | null | null | null | 0 | 0 | Entry not found |
abrahamzelano/uwu | 2023-09-26T03:34:16.000Z | [
"license:openrail",
"region:us"
] | abrahamzelano | null | null | null | 0 | 0 | ---
license: openrail
---
|
kye/all-huggingface-python-code-2 | 2023-09-26T03:35:39.000Z | [
"region:us"
] | kye | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: python_code
dtype: string
- name: repo_name
dtype: string
- name: file_path
dtype: string
splits:
- name: train
num_bytes: 28487113
num_examples: 2634
download_size: 8685205
dataset_size: 28487113
---
# Dataset Card for "all-huggingface-python-code-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kye/all-torvalds-c-code-1 | 2023-09-26T03:43:29.000Z | [
"region:us"
] | kye | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: python_code
dtype: string
- name: repo_name
dtype: string
- name: file_path
dtype: string
splits:
- name: train
num_bytes: 651565756
num_examples: 33893
download_size: 231630597
dataset_size: 651565756
---
# Dataset Card for "all-torvalds-c-code-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/hikawa_sayo_bangdream | 2023-09-26T04:05:30.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of hikawa_sayo (BanG Dream!)
This is the dataset of hikawa_sayo (BanG Dream!), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 439 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 439 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 439 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 439 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/kagamihara_nadeshiko_yurucamp | 2023-09-26T16:16:24.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Kagamihara Nadeshiko
This is the dataset of Kagamihara Nadeshiko, containing 300 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 701 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 788 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 701 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 701 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 602 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 788 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 788 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
DavidLanz/alpaca-gpt4-tw-input-output-48k | 2023-09-26T04:56:44.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"gpt3",
"alpaca",
"instruction-finetuning",
"region:us"
] | DavidLanz | null | null | null | 0 | 0 | ---
task_categories:
- text-generation
- conversational
- question-answering
language:
- en
size_categories:
- 10K<n<100K
license: cc-by-4.0
tags:
- gpt3
- alpaca
- instruction-finetuning
---
# Dataset Card for "alpaca-gpt4-tw-input-output-48k"
This dataset contains English instruction-following data generated by GPT-4 using Alpaca prompts, for fine-tuning LLMs.
The dataset was originally shared in this repository: https://github.com/ntunlplab/traditional-chinese-alpaca. This is just a wrapper for compatibility with Hugging Face's datasets library.
## Dataset structure
It contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca.
The dataset has the same format as Alpaca data, except the output is generated by GPT-4:
- `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, the answer to the instruction as generated by `GPT-4`.
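A minimal sketch of turning one such record into a single training string; the formatting template and the sample record are our own illustration, not the official Alpaca prompt:

```python
def build_prompt(example: dict) -> str:
    """Format one instruction/input/output record into a training string,
    dropping the Input line when the record has no input context."""
    if example.get("input"):
        return (
            f"Instruction: {example['instruction']}\n"
            f"Input: {example['input']}\n"
            f"Response: {example['output']}"
        )
    return f"Instruction: {example['instruction']}\nResponse: {example['output']}"

example = {"instruction": "Translate to English.", "input": "你好", "output": "Hello"}
print(build_prompt(example))
```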
## Difference with the original Alpaca dataset
The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generates the completions with GPT-4. Thus, in general, the responses are of higher quality and length. Here is an example:
#### Example from Alpaca-GPT3:
```bash
{
'instruction': '辨識那一個平台與其他不同。',
'input': '輸入:Twitter,Instagram,Telegram。',
'output': 'Twitter,Instagram 和 Telegram 都是流行的社群媒體平台,但它們之間也有一些不同之處:\n\n- Twitter: 是一個Twitter社群媒體平台,使用者可以發佈 280 字元以內的訊息(即推文)並與其他使用者互動。它主要關注文字交流,儘管使用者也可以分享照片和視訊。\n- Instagram: 是一個圖片和視訊分享的社群媒體平台,使用者可以在 Instagram 上發佈照片和視訊並與其他使用者互動。它主要關注圖片和視訊的分享,儘管使用者也可以新增文字說明。\n- Telegram: 是一個即時通訊和社群媒體平台,使用者可以通過文字、語音、圖片和視訊與其他使用者即時交流。與 Twitter 和 Instagram 不同,Telegram 更側重於即時通訊,它還提供了端到端加密的私人聊天功能。'
}
```
## Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). |
CyberHarem/ogaki_chiaki_yurucamp | 2023-09-26T17:01:54.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Ōgaki Chiaki
This is the dataset of Ōgaki Chiaki, containing 299 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 299 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 759 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 908 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 299 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 299 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 299 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 759 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 759 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 636 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 908 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 908 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/inuyama_aoi_yurucamp | 2023-09-26T17:50:52.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Inuyama Aoi
This is the dataset of Inuyama Aoi, containing 299 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 299 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 739 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 843 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 299 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 299 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 299 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 739 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 739 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 626 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 843 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 843 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/imai_lisa_bangdream | 2023-09-26T05:31:19.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of imai_lisa (BanG Dream!)
This is the dataset of imai_lisa (BanG Dream!), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 447 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 447 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 447 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 447 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/hikawa_hina_bangdream | 2023-09-26T05:40:46.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of hikawa_hina (BanG Dream!)
This is the dataset of hikawa_hina (BanG Dream!), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 423 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 423 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 423 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 423 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/shima_rin_yurucamp | 2023-09-26T18:34:44.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Shima Rin
This is the dataset of Shima Rin, containing 300 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 721 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 799 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 721 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 721 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 628 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 799 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 799 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/toba_minami_yurucamp | 2023-09-26T18:50:37.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Toba Minami
This is the dataset of Toba Minami, containing 100 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 100 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 237 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 269 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 100 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 100 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 100 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 237 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 237 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 207 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 269 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 269 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
SamuelGalaxys/iJupyter | 2023-09-27T08:34:25.000Z | [
"license:apache-2.0",
"region:us"
] | SamuelGalaxys | null | null | null | 0 | 0 | ---
license: apache-2.0
---
# Administrative Division Information of China
Data demo: [https://passer-by.com/data_location/](https://passer-by.com/data_location/)
Three-level cascading selector plugin: [https://jquerywidget.com/jquery-citys/](https://jquerywidget.com/jquery-citys/)
### Copyright
The database is compiled by [passer-by.com](https://passer-by.com/); follow this project for the latest and most complete data.
### Data notes
- Province, city, and district data come from the civil affairs authorities, State Council announcements, and the National Bureau of Statistics, ensuring timely updates and authority;
- The street (town/township) data is huge, and official correspondence between localities and levels is frequent, so timeliness cannot be guaranteed (latest data: October 31, 2022);
- The street (town/township) data spans many files; to stay compatible with older administrative division codes, updates overwrite the files in place;
- The data is administrative-division data with the administrative division as its unit. Administrative management zones overlap with administrative divisions and are not included;
(Administrative management zones usually include special economic zones / economic development zones / high-tech zones / new areas / industrial zones; some have also been promoted to administrative divisions and need to be distinguished)
### About administrative division codes
The national standard Codes for the Administrative Divisions of the People's Republic of China (GB/T 2260) is used.
The code can be divided into three levels; from left to right, the digits mean:
- The first and second digits denote the province (autonomous region, municipality, or special administrative region)
- The third and fourth digits denote the city (prefecture, autonomous prefecture, league, or the aggregate code for districts and counties directly under a centrally administered municipality)
- The fifth and sixth digits denote the county (district, county-level city, or banner).
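As a sketch of the three-level structure above (the helper function is our own illustration; "110101" is the code for Dongcheng District, Beijing):

```python
def split_division_code(code: str) -> dict:
    """Split a 6-digit GB/T 2260 administrative division code into
    its province, city, and county prefixes."""
    if len(code) != 6 or not code.isdigit():
        raise ValueError("expected a 6-digit numeric code")
    return {
        "province": code[:2],  # first two digits: province level
        "city": code[:4],      # first four digits: city level
        "county": code,        # all six digits: county level
    }

# "110101" is Dongcheng District, Beijing: province "11", city "1101".
print(split_division_code("110101"))
```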
#### Code standards
* [Ministry of Civil Affairs of the PRC - Administrative Division Codes of the People's Republic of China](https://www.mca.gov.cn/mzsj/xzqh/2022/202201xzqh.html)
* [National Bureau of Statistics of China - Statistical Division and Urban-Rural Classification Codes](http://www.stats.gov.cn/sj/tjbz/tjyqhdmhcxhfdm/2022/index.html)
* [National Bureau of Statistics of China - Compilation Rules for Statistical Division Codes and Urban-Rural Classification Codes](http://www.stats.gov.cn/sj/tjbz/gjtjbz/202302/t20230213_1902741.html)
Codes for Hong Kong, Macau, and Taiwan are not standard codes; they were compiled with reference to the standard coding rules so that users can work with them uniformly.
### Feedback
If any data is wrong or out of date, please let us know (leave a message under "Issues") so it can be updated as soon as possible~
|
rahulmnavneeth/SAMPLE-D | 2023-09-26T06:20:11.000Z | [
"region:us"
] | rahulmnavneeth | null | null | null | 0 | 0 | Entry not found |
BangumiBase/nonnonbiyori | 2023-09-29T12:10:36.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Non Non Biyori
This is the image base of bangumi Non Non Biyori, we detected 30 characters, 4423 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 692 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 576 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 56 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 18 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 13 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 161 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 37 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 37 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 591 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 18 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 15 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 27 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 194 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 34 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 36 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 174 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 14 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 15 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 52 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 92 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 20 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 1032 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 27 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 7 | [Download](23/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 24 | 177 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 83 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 49 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 51 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 13 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 112 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
Coroseven/YorForger | 2023-09-26T06:27:26.000Z | [
"region:us"
] | Coroseven | null | null | null | 0 | 0 | Entry not found |
TalTechNLP/samsum_ee | 2023-10-09T09:41:39.000Z | [
"license:cc-by-4.0",
"region:us"
] | TalTechNLP | null | null | null | 0 | 0 | ---
license: cc-by-4.0
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
- name: en_dialogue
dtype: string
- name: en_summary
dtype: string
splits:
- name: train
num_bytes: 18445826
num_examples: 14732
- name: test
num_bytes: 1091645
num_examples: 819
- name: validation
num_bytes: 1059869
num_examples: 818
download_size: 13190777
dataset_size: 20597340
---
|
Daya7624/Web_MD_Dataset_V0 | 2023-09-26T06:46:34.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"medical",
"region:us"
] | Daya7624 | null | null | null | 0 | 0 | ---
task_categories:
- text-generation
tags:
- medical
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
TanvirOnHF/fakescience | 2023-09-26T10:27:08.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:cdla-sharing-1.0",
"GPT-3.5",
"GPT-4",
"Claude",
"Bard",
"Alpaca",
"LLaMA",
"LLaMA-2",
"Vicuna",
"PaLM-2",
"region:us"
] | TanvirOnHF | null | null | null | 0 | 0 | ---
license: cdla-sharing-1.0
pretty_name: FakeScience
tags:
- GPT-3.5
- GPT-4
- Claude
- Bard
- Alpaca
- LLaMA
- LLaMA-2
- Vicuna
- PaLM-2
language:
- en
size_categories:
- 1K<n<10K
---
# FakeScience [JSON dataset]
A dataset comprising artificially generated fictitious scientific theories, concepts, facts, laws and hypotheses, derived from a diverse array of Large Language Models (LLMs) such as GPT-3.5, GPT-4, Claude, Bard, Alpaca, LLaMA, LLaMA-2, Vicuna, and PaLM-2.
## Dataset Contents
The dataset includes fictitious scientific entries, each typed as a concept, fact, hypothesis, law, or theory, along with a unique title and a brief yet imaginative description. These entries are entirely fictional and should not be mistaken for real scientific knowledge.
## Prompt
The prompt used:
```txt
Generate a JSON-formatted list of fictitious scientific entries, each comprising with a title, a description, and an assigned type (concept, fact, hypothesis, law, or theory). All the entries should be crafted to appear logically plausible, creating an illusion of scientific credibility.
Adhere to the specified structure:
'''json
[
{
"title": "...",
"description": "...",
"type": "concept"
},
{
"title": "...",
"description": "...",
"type": "fact"
},
{
"title": "...",
"description": "...",
"type": "hypothesis"
},
{
"title": "...",
"description": "...",
"type": "law"
},
{
"title": "...",
"description": "...",
"type": "theory"
}
]
'''
```
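Entries produced by this prompt can be validated and grouped by their `type` field. A minimal sketch (the sample titles below are invented for illustration, not taken from the dataset):

```python
import json
from collections import Counter

# Invented sample in the same JSON shape the prompt requests
raw = '''[
  {"title": "Lumiflux Resonance", "description": "...", "type": "concept"},
  {"title": "Veridian Constant", "description": "...", "type": "fact"},
  {"title": "Echo-Drift Hypothesis", "description": "...", "type": "hypothesis"}
]'''

entries = json.loads(raw)
valid_types = {"concept", "fact", "hypothesis", "law", "theory"}
assert all(e["type"] in valid_types for e in entries)

counts = Counter(e["type"] for e in entries)
print(dict(counts))  # → {'concept': 1, 'fact': 1, 'hypothesis': 1}
```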
## Disclaimer
Please note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality.
|
TalTechNLP/dialogsum_ee | 2023-10-10T14:42:37.000Z | [
"license:cc-by-4.0",
"region:us"
] | TalTechNLP | null | null | null | 0 | 0 | ---
license: cc-by-4.0
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: en_dialogue
dtype: string
- name: en_summary
dtype: string
splits:
- name: train
num_bytes: 22666234
num_examples: 12460
- name: validation
num_bytes: 881912
num_examples: 500
- name: test
num_bytes: 2703111
num_examples: 1500
download_size: 14384437
dataset_size: 26251257
---
|
CyberHarem/shirasagi_chisato_bangdream | 2023-09-26T06:47:52.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of shirasagi_chisato (BanG Dream!)
This is the dataset of shirasagi_chisato (BanG Dream!), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 458 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 458 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 458 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 458 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
ekshat/text-2-sql-with-context | 2023-09-26T07:18:08.000Z | [
"region:us"
] | ekshat | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 32317282.06065388
num_examples: 74648
- name: test
num_bytes: 1700977.939346119
num_examples: 3929
download_size: 8982199
dataset_size: 34018260.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "text-2-sql-with-context"
This dataset is prepared in the Alpaca format introduced by Stanford for training LLMs, and it has been used to fine-tune Chat Llama-2 7B. For more information, please visit: https://huggingface.co/ekshat/Llama-2-7b-chat-finetune-for-text2sql |
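The Alpaca format mentioned in the card above packs instruction, context, and response into one text string. A minimal sketch of that shape (the section headers follow the common Alpaca template and are assumptions here, since the card itself only documents a single `text` column):

```python
def build_alpaca_example(instruction: str, context: str, response: str) -> str:
    """Assemble one training example in the common Alpaca instruction format."""
    return (
        "### Instruction:\n" + instruction + "\n\n"
        "### Input:\n" + context + "\n\n"
        "### Response:\n" + response
    )

example = build_alpaca_example(
    "Write a SQL query that lists all users.",
    "CREATE TABLE users (id INT, name TEXT);",
    "SELECT * FROM users;",
)
print(example)
```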
aghbb/autotrain-data-vit | 2023-09-26T07:25:42.000Z | [
"region:us"
] | aghbb | null | null | null | 0 | 0 | Entry not found |
veezbo/phinc | 2023-09-26T07:36:01.000Z | [
"task_categories:translation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:en",
"language:hi",
"license:cc-by-4.0",
"region:us"
] | veezbo | null | null | null | 1 | 0 | ---
license: cc-by-4.0
task_categories:
- translation
- text2text-generation
language:
- en
- hi
pretty_name: A Parallel Hinglish Social Media Code-Mixed Corpus for Machine Translation
size_categories:
- 10K<n<100K
---
# Description
PHINC is a parallel corpus for machine translation pairing code-mixed Hinglish (a fusion of Hindi and English commonly used in modern India) with human-generated English translations.
# Credit
All credit goes to:
[PHINC: A Parallel Hinglish Social Media Code-Mixed Corpus for Machine Translation](https://aclanthology.org/2020.wnut-1.7) (Srivastava & Singh, WNUT 2020)
# Original Abstract
Code-mixing is the phenomenon of using more than one language in a sentence. It is a very frequently observed pattern of communication on social media platforms. Flexibility to use mixed languages in one text message might help to communicate efficiently with the target audience. But, it adds to the challenge of processing and understanding natural language to a much larger extent. Here, we are presenting a parallel corpus of the 13,738 code-mixed English-Hindi sentences and their corresponding translation in English. The translations of sentences are done manually by the annotators. We are releasing the parallel corpus to facilitate future research opportunities for code-mixed machine translation.
## Note
This data has been automatically converted into a Hugging Face dataset (including a conversion to Parquet). The original raw dataset can be found [here](https://zenodo.org/record/3605597). |
zakester/WisdomLM-dataset | 2023-09-26T07:39:11.000Z | [
"region:us"
] | zakester | null | null | null | 0 | 0 | Entry not found |