id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
dummyuser/gene | 2023-10-06T03:03:26.000Z | [
"region:us"
] | dummyuser | null | null | null | 0 | 0 | Entry not found |
dummyuser/genAI | 2023-10-06T03:17:54.000Z | [
"region:us"
] | dummyuser | null | null | null | 0 | 0 | Entry not found |
mariamjamal001/kws_bg | 2023-10-06T03:55:50.000Z | [
"region:us"
] | mariamjamal001 | null | null | null | 0 | 0 | Entry not found |
BangumiBase/maoujoudeoyasumi | 2023-10-06T05:15:32.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Maoujou De Oyasumi
This is the image base of bangumi Maoujou de Oyasumi, we detected 21 characters, 1076 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 9 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 12 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 17 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 22 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 195 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 21 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 46 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 8 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 36 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 9 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 72 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 69 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 15 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 15 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 10 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 10 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 12 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 396 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 21 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 21 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 60 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/punchline | 2023-10-06T05:17:13.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Punch Line
This is the image base of bangumi Punch Line, we detected 17 characters, 1203 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 104 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 12 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 134 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 47 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 135 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 16 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 11 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 14 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 14 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 21 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 18 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 40 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 150 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 324 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 14 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 9 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 140 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/suzumiyaharuhinoyuuutsu | 2023-10-06T07:11:00.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Suzumiya Haruhi No Yuuutsu
This is the image base of bangumi Suzumiya Haruhi no Yuuutsu, we detected 22 characters, 4994 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 1639 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 563 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 606 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 72 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 27 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 103 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 796 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 23 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 22 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 453 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 124 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 67 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 19 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 49 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 13 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 34 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 48 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 12 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 44 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 57 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 5 | [Download](20/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| noise | 218 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
frank-chieng/python_datamining | 2023-10-09T20:52:06.000Z | [
"region:us"
] | frank-chieng | null | null | null | 0 | 0 | Entry not found |
phongmt184172/python_code_version2 | 2023-10-06T04:46:06.000Z | [
"region:us"
] | phongmt184172 | null | null | null | 0 | 0 | Entry not found |
BangumiBase/fireforce | 2023-10-06T08:11:49.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Fire Force
This is the image base of bangumi Fire Force, we detected 60 characters, 5217 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 1278 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 231 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 55 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 65 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 89 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 30 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 73 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 140 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 47 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 56 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 264 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 25 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 41 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 173 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 73 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 35 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 23 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 70 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 23 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 26 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 57 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 34 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 156 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 29 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 218 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 34 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 67 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 20 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 42 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 33 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 69 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 34 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 41 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 177 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 36 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 299 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 52 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 135 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 26 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 26 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 5 | [Download](40/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 41 | 25 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 15 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 12 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 22 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 17 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 85 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 12 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 85 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 33 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 37 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 17 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 122 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 25 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 60 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 13 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 6 | [Download](56/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 57 | 5 | [Download](57/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 58 | 10 | [Download](58/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 209 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/zetsuennotempest | 2023-10-06T06:18:35.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Zetsuen No Tempest
This is the image base of bangumi Zetsuen no Tempest, we detected 16 characters, 2070 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 75 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 405 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 19 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 435 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 15 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 35 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 40 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 124 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 10 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 28 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 402 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 88 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 55 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 40 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 175 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 124 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
Intuit-GenSRF/AnikaBasu-CyberbullyingDataset-es | 2023-10-06T19:33:44.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: __index_level_0__
dtype: int64
- name: processed_text
sequence: string
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 1407598
num_examples: 2955
download_size: 0
dataset_size: 1407598
---
# Dataset Card for "AnikaBasu-CyberbullyingDataset-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/violetevergarden | 2023-10-06T08:00:10.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Violet Evergarden
This is the image base of bangumi Violet Evergarden, we detected 72 characters, 4728 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 32 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 166 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 16 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 23 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 160 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 66 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 17 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 13 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 23 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 15 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 38 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 30 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 31 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 23 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 15 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 34 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 104 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 23 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 103 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 75 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 32 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 27 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 21 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 25 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 19 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 13 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 22 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 180 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 39 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 16 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 156 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 21 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 70 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 23 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 180 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 12 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 39 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 58 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 108 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 27 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 38 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 32 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 335 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 84 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 16 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 94 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 19 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 67 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 20 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 79 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 17 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 9 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 6 | [Download](52/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 53 | 34 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 12 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 31 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 33 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 17 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 24 | [Download](58/dataset.zip) |  |  |  |  |  |  |  |  |
| 59 | 1282 | [Download](59/dataset.zip) |  |  |  |  |  |  |  |  |
| 60 | 20 | [Download](60/dataset.zip) |  |  |  |  |  |  |  |  |
| 61 | 10 | [Download](61/dataset.zip) |  |  |  |  |  |  |  |  |
| 62 | 8 | [Download](62/dataset.zip) |  |  |  |  |  |  |  |  |
| 63 | 20 | [Download](63/dataset.zip) |  |  |  |  |  |  |  |  |
| 64 | 12 | [Download](64/dataset.zip) |  |  |  |  |  |  |  |  |
| 65 | 14 | [Download](65/dataset.zip) |  |  |  |  |  |  |  |  |
| 66 | 80 | [Download](66/dataset.zip) |  |  |  |  |  |  |  |  |
| 67 | 9 | [Download](67/dataset.zip) |  |  |  |  |  |  |  |  |
| 68 | 14 | [Download](68/dataset.zip) |  |  |  |  |  |  |  |  |
| 69 | 10 | [Download](69/dataset.zip) |  |  |  |  |  |  |  |  |
| 70 | 9 | [Download](70/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 178 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BirdL/DONOTUSEDATA-SideB | 2023-10-07T21:46:48.000Z | [
"not-for-all-audiences",
"region:us"
] | BirdL | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: sexual
dtype: float64
- name: hate
dtype: float64
- name: violence
dtype: float64
- name: self-harm
dtype: float64
- name: sexual/minors
dtype: float64
- name: hate/threatening
dtype: float64
- name: violence/graphic
dtype: float64
splits:
- name: train
num_bytes: 6855523
num_examples: 30002
download_size: 5665789
dataset_size: 6855523
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- not-for-all-audiences
---
# Dataset Card for "DONOTUSEDATA-SideB"
Studying the effects of harmful data on LLMs. Side B.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
skbose-fold/wizmap-datasets | 2023-10-06T09:52:59.000Z | [
"region:us"
] | skbose-fold | null | null | null | 0 | 0 | Entry not found |
HotDaddy/hdv2 | 2023-10-06T06:46:30.000Z | [
"region:us"
] | HotDaddy | null | null | null | 0 | 0 | Entry not found |
shivani-bhoi2002/ProjectDataset | 2023-10-06T06:56:39.000Z | [
"license:unknown",
"region:us"
] | shivani-bhoi2002 | null | null | null | 0 | 0 | ---
license: unknown
---
|
ai4ce/EgoPAT3Dv2 | 2023-10-10T07:39:49.000Z | [
"language:en",
"robotics",
"region:us"
] | ai4ce | null | null | null | 0 | 0 | ---
language:
- en
tags:
- robotics
--- |
minh21/COVID-QA-question-answering-biencoder-data-75_25 | 2023-10-06T07:38:59.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
splits:
- name: train
num_bytes: 59010693
num_examples: 1348
- name: validation
num_bytes: 4567041
num_examples: 158
download_size: 13833996
dataset_size: 63577734
---
# Dataset Card for "COVID-QA-question-answering-biencoder-data-75_25"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
minh21/COVID-QA-testset-biencoder-data-65_25_10 | 2023-10-06T07:47:57.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 16708455
num_examples: 201
download_size: 442083
dataset_size: 16708455
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "COVID-QA-testset-biencoder-data-65_25_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
minh21/COVID-QA-question-answering-biencoder-data-65_25_10 | 2023-10-06T07:48:19.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
splits:
- name: train
num_bytes: 55383294
num_examples: 1170
- name: validation
num_bytes: 5172033
num_examples: 140
download_size: 16954453
dataset_size: 60555327
---
# Dataset Card for "COVID-QA-question-answering-biencoder-data-65_25_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
flozi00/single-queries-german | 2023-10-07T07:39:21.000Z | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"language:de",
"license:apache-2.0",
"region:us"
] | flozi00 | null | null | null | 0 | 0 | ---
language:
- de
license: apache-2.0
task_categories:
- text2text-generation
- text-generation
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 10067
num_examples: 51
download_size: 0
dataset_size: 10067
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "single-queries-german"
[Converted from here](https://github.com/flozi00/atra/blob/main/_selfquery.txt) |
metral/ranobe_sample | 2023-10-06T08:25:32.000Z | [
"language:ja",
"license:apache-2.0",
"region:us"
] | metral | null | null | null | 0 | 0 | ---
license: apache-2.0
language:
- ja
---
# What is this?
This is the text of my novel. It has approximately 240,000 words.
The genre is fantasy light novel.
# What is the licence?
The licence type is Apache 2.0.
# How can I use it?
I want you to use this novel as a sample of Japanese writing.
After that, you are free to use it within the scope of the licence.
You can send me fan letters :)
# Are there any precautions I should be aware of?
This text is still available on Kakuyom. The unique format for its publication has been retained. Please note that some of the formatting, such as ruby and highlighted characters, are not found in normal Japanese texts.
* https://kakuyomu.jp/help/entry/notation
# Others.
If you have any questions, please feel free to contact the HuggingFace community. |
minh21/COVID-QA-testset-biencoder-data-45_45_10 | 2023-10-06T08:08:08.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 16708455
num_examples: 201
download_size: 442083
dataset_size: 16708455
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "COVID-QA-testset-biencoder-data-45_45_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
minh21/COVID-QA-question-answering-biencoder-data-45_45_10 | 2023-10-06T08:08:24.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
splits:
- name: train
num_bytes: 40708361
num_examples: 814
- name: validation
num_bytes: 5112241
num_examples: 94
download_size: 12639574
dataset_size: 45820602
---
# Dataset Card for "COVID-QA-question-answering-biencoder-data-45_45_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Iceclear/StableSR-TestSets | 2023-10-06T08:46:40.000Z | [
"task_categories:image-to-image",
"license:other",
"arxiv:2305.07015",
"region:us"
] | Iceclear | null | null | null | 1 | 0 | ---
license: other
license_name: ntu-slab-license
license_link: https://github.com/IceClear/StableSR/blob/main/LICENSE.txt
task_categories:
- image-to-image
---
# StableSR TestSets Card
These test sets are used associated with the StableSR, available [here](https://github.com/IceClear/StableSR).
## Data Details
- **Developed by:** Jianyi Wang
- **Data type:** Synthetic and real-world test sets for image super-resolution
- **License:** [S-Lab License 1.0](https://github.com/IceClear/StableSR/blob/main/LICENSE.txt)
- **Data Description:** The test sets are used to reproduce the metric results shown in [Paper](https://arxiv.org/abs/2305.07015).
- **Resources for more information:** [GitHub Repository](https://github.com/IceClear/StableSR).
- **Cite as:**
@InProceedings{wang2023exploiting,
author = {Wang, Jianyi and Yue, Zongsheng and Zhou, Shangchen and Chan, Kelvin CK and Loy, Chen Change},
title = {Exploiting Diffusion Prior for Real-World Image Super-Resolution},
booktitle = {arXiv preprint arXiv:2305.07015},
year = {2023},
}
# Uses
Please refer to [S-Lab License 1.0](https://github.com/IceClear/StableSR/blob/main/LICENSE.txt)
We currently provide the following test sets:
- DIV2K_Val: 3000 synthetic data pairs on the validation of [DIV2K](https://data.vision.ee.ethz.ch/cvl/DIV2K/) generated used the same degradation used for training StableSR.
- RealSR Val: Center-cropped data pairs on [RealSRv3](https://github.com/csjcai/RealSR).
- DRealSR Val: Center-cropped data pairs on [DRealSR](https://github.com/xiezw5/Component-Divide-and-Conquer-for-Real-World-Image-Super-Resolution).
- DPED Val: Center-cropped LQ-only data on [DPED](https://github.com/aiff22/DPED).
## Evaluation Results
See [Paper](https://arxiv.org/abs/2305.07015) for details. |
gillkabir/expanded_dataset2 | 2023-10-06T08:29:24.000Z | [
"region:us"
] | gillkabir | null | null | null | 0 | 0 | Entry not found |
pphuc25/data_trigger | 2023-10-06T08:43:55.000Z | [
"region:us"
] | pphuc25 | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 469024
num_examples: 3400
- name: test
num_bytes: 77263
num_examples: 600
download_size: 316166
dataset_size: 546287
---
# Dataset Card for "data_trigger"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
andrewlee1807/Gyeonggi | 2023-10-06T08:51:57.000Z | [
"license:apache-2.0",
"region:us"
] | andrewlee1807 | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
gbarone77/camoscio | 2023-10-06T09:10:51.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:it",
"region:us"
] | gbarone77 | null | null | null | 0 | 0 | ---
task_categories:
- text-generation
language:
- it
size_categories:
- 10K<n<100K
--- |
johannes-garstenauer/embeddings_from_distilbert_class_heaps_and_eval_part0_test | 2023-10-06T09:08:02.000Z | [
"region:us"
] | johannes-garstenauer | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: struct
dtype: string
- name: label
dtype: int64
- name: pred
dtype: int64
- name: cls_layer_6
sequence: float32
- name: cls_layer_5
sequence: float32
- name: cls_layer_4
sequence: float32
splits:
- name: train
num_bytes: 13428556
num_examples: 1408
download_size: 16665816
dataset_size: 13428556
---
# Dataset Card for "embeddings_from_distilbert_class_heaps_and_eval_part0_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
johannes-garstenauer/embeddings_from_distilbert_class_heaps_and_eval_part1_test | 2023-10-06T09:08:19.000Z | [
"region:us"
] | johannes-garstenauer | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: struct
dtype: string
- name: label
dtype: int64
- name: pred
dtype: int64
- name: cls_layer_6
sequence: float32
- name: cls_layer_5
sequence: float32
- name: cls_layer_4
sequence: float32
splits:
- name: train
num_bytes: 12230881
num_examples: 1283
download_size: 14966255
dataset_size: 12230881
---
# Dataset Card for "embeddings_from_distilbert_class_heaps_and_eval_part1_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/angelsofdeath | 2023-10-06T10:20:16.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Angels Of Death
This is the image base of bangumi Angels of Death, we detected 8 characters, 1201 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 621 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 243 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 80 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 15 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 92 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 84 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 8 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 58 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/littlewitchacademia | 2023-10-06T10:59:11.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Little Witch Academia
This is the image base of bangumi Little Witch Academia, we detected 41 characters, 3200 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 803 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 62 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 61 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 26 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 12 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 106 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 63 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 35 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 16 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 21 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 181 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 28 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 21 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 61 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 26 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 11 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 40 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 115 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 27 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 11 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 41 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 16 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 189 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 8 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 21 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 31 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 27 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 111 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 265 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 30 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 21 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 29 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 66 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 35 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 20 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 41 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 38 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 30 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 11 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 8 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 436 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/nichijou | 2023-10-06T10:48:11.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Nichijou
This is the image base of bangumi Nichijou, we detected 33 characters, 2652 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 346 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 16 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 51 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 449 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 105 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 10 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 75 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 91 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 73 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 16 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 479 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 33 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 72 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 75 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 79 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 19 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 17 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 80 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 30 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 181 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 16 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 15 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 36 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 100 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 13 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 33 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 14 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 7 | [Download](27/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 28 | 9 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 14 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 12 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 22 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 64 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
Falah/mosque_forest_image_prompts | 2023-10-06T09:35:08.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 3517254
num_examples: 10000
download_size: 150520
dataset_size: 3517254
---
# Dataset Card for "mosque_forest_image_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
flozi00/classify-llm-tasks-german | 2023-10-07T07:39:24.000Z | [
"region:us"
] | flozi00 | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1435
num_examples: 29
download_size: 2194
dataset_size: 1435
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "classify-llm-tasks-german"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/eizoukenniwateodasuna | 2023-10-06T10:40:16.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Eizouken Ni Wa Te O Dasu Na!
This is the image base of bangumi Eizouken ni wa Te o Dasu na!, we detected 17 characters, 1057 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 235 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 290 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 225 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 16 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 28 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 38 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 30 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 23 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 12 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 13 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 12 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 10 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 12 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 8 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 42 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 10 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 53 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
DGurgurov/maltese_data | 2023-10-07T13:20:05.000Z | [
"region:us"
] | DGurgurov | null | null | null | 0 | 0 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc/guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
### Dataset Summary
This repository comprises two distinct datasets focusing on Maltese:
1. **Maltese Words and Their Relationships from ConceptNet**
This dataset includes Maltese words and their respective relationships, sourced from ConceptNet.
2. **Maltese Words and Their English Glosses from Gabra**
Dataset containing Maltese words and their corresponding English glosses, extracted from the Gabra database.
### Languages
- Maltese
## Dataset Creation
- The data was extracted from ConceptNet and Gabra for further use in training PPMI embeddings.
### Contributors
- Daniil Gurgurov
|
Back-up/html_data | 2023-10-06T10:02:21.000Z | [
"region:us"
] | Back-up | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: title
dtype: string
- name: url
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 1469722
num_examples: 11383
download_size: 285015
dataset_size: 1469722
---
# Dataset Card for "html_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DGurgurov/maltese_embeddings | 2023-10-07T13:16:44.000Z | [
"region:us"
] | DGurgurov | null | null | null | 0 | 0 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc/guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
### Dataset Summary
This repository contains three distinct datasets focusing on Maltese word embeddings:
1. **GloVe Maltese Word Embeddings**
Embeddings generated using GloVe on the "korpus_malti" dataset, the largest Maltese corpus available.
2. **Word2Vec Maltese Word Embeddings**
Word embeddings for Maltese obtained using Word2Vec trained on the "korpus_malti" dataset.
3. **PPMI Maltese Word Embeddings**
Pointwise Mutual Information (PPMI) based word embeddings generated from ConceptNet data via SVD on the co-occurrence matrix.
### Languages
- Maltese
## Dataset Creation
- GloVe and Word2Vec embeddings were trained using the largest Maltese dataset, "korpus_malti".
- Details of the training parameters for both GloVe and Word2Vec models can be found in the provided scripts.
- PPMI embeddings were trained using ConceptNet data and applying SVD on the co-occurrence matrix.
### Contributors
- Daniil Gurgurov
|
BangumiBase/popteamepic | 2023-10-06T11:24:35.000Z | [
"size_categories:n<1K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- n<1K
---
# Bangumi Image Base of Pop Team Epic
This is the image base of bangumi POP TEAM EPIC, we detected 15 characters, 353 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 35 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 13 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 9 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 6 | [Download](3/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 4 | 13 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 15 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 48 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 15 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 77 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 14 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 10 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 8 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 13 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 11 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 66 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
shijli/enwik8 | 2023-10-06T11:13:00.000Z | [
"region:us"
] | shijli | null | null | null | 0 | 0 | Entry not found |
CyberHarem/erza_scarlet_fairytail | 2023-10-06T11:40:12.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of erza_scarlet_fairytail
This is the dataset of erza_scarlet_fairytail, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 427 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 434 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 427 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 427 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 166 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 434 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 434 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
stephaniecs/clean_dataset | 2023-10-06T11:40:44.000Z | [
"region:us"
] | stephaniecs | null | null | null | 0 | 0 | Entry not found |
stephaniecs/demo_dataset | 2023-10-06T11:43:23.000Z | [
"region:us"
] | stephaniecs | null | null | null | 0 | 0 | Entry not found |
cointegrated/nli-rus-translated-v2021 | 2023-10-06T14:51:23.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"size_categories:1M<n<10M",
"language:ru",
"region:us"
] | cointegrated | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: string
- name: source
dtype: string
- name: split
dtype: string
- name: premise_ru
dtype: string
- name: hypothesis_ru
dtype: string
- name: reverse_entailment_score
dtype: float64
- name: len_ratio
dtype: float64
- name: idx
dtype: int64
splits:
- name: train
num_bytes: 1156491691
num_examples: 1756548
- name: dev
num_bytes: 78632908
num_examples: 106557
- name: test
num_bytes: 30464486
num_examples: 34615
download_size: 504709758
dataset_size: 1265589085
task_categories:
- text-classification
task_ids:
- natural-language-inference
language:
- ru
size_categories:
- 1M<n<10M
---
# Dataset Card for "nli-rus-translated-v2021"
This dataset was introduced in the Habr post
["Нейросети для Natural Language Inference (NLI): логические умозаключения на русском языке"](https://habr.com/ru/articles/582620/).
It is composed from various English NLI datasets automatically translated into Russian.
Here are the sizes of the source datasets included into different splits:
| source | train | dev | test |
|:------------|--------:|------:|-------:|
| add_one_rte | 4991 | 387 | 0 |
| anli_r1 | 16946 | 1000 | 1000 |
| anli_r2 | 45460 | 1000 | 1000 |
| anli_r3 | 100459 | 1200 | 1200 |
| copa | 800 | 200 | 0 |
| fever | 162330 | 20478 | 20343 |
| help | 29347 | 3355 | 3189 |
| iie | 281643 | 31232 | 0 |
| imppres | 10179 | 7661 | 7660 |
| joci | 8412 | 939 | 0 |
| mnli | 392662 | 19647 | 0 |
| monli | 2186 | 269 | 223 |
| mpe | 9000 | 1000 | 0 |
| qnli | 108436 | 5732 | 0 |
| scitail | 24900 | 2126 | 0 |
| sick | 9500 | 500 | 0 |
| snli | 549297 | 9831 | 0 |
Most of the original data were taken from the repository [felipessalvatore/NLI_datasets](https://github.com/felipessalvatore/NLI_datasets).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ahmetalper/test | 2023-10-06T12:10:43.000Z | [
"region:us"
] | ahmetalper | null | null | null | 0 | 0 | Entry not found |
julia-neugarten/FSNNA2023 | 2023-10-06T12:02:05.000Z | [
"region:us"
] | julia-neugarten | null | null | null | 0 | 0 | Entry not found |
HamdanXI/difference_analysis | 2023-10-06T12:10:59.000Z | [
"region:us"
] | HamdanXI | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: en_toxic_comment
dtype: string
- name: en_neutral_comment
dtype: string
- name: edit_ops
list:
- name: content
dtype: string
- name: operation
dtype: string
- name: position
dtype: int64
- name: replacement_content
dtype: string
splits:
- name: train
num_bytes: 4067122
num_examples: 19744
download_size: 1959427
dataset_size: 4067122
---
# Dataset Card for "difference_analysis"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
symanto/autextification2023 | 2023-10-06T13:08:55.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"source_datasets:multi_eurlex",
"source_datasets:xsum",
"source_datasets:csebuetnlp/xlsum",
"source_datasets:mlsum",
"source_datasets:amazon_polarity",
"source_datasets:https://sinai.ujaen.es/investigacion/recursos/coah",
"source_datasets:https://sinai.ujaen.es/investigacion/recursos/coar",
"source_datasets:carblacac/twitter-sentiment-analysis",
"source_datasets:cardiffnlp/tweet_sentiment_multilingual",
"source_datasets:https://www.kaggle.com/datasets/ricardomoya/tweets-poltica-espaa",
"source_datasets:wiki_lingua",
"language:en",
"language:es",
"license:cc-by-nc-sa-4.0",
"arxiv:2309.11285",
"region:us"
] | symanto | null | null | null | 0 | 0 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- en
- es
pretty_name: AuTexTification 2023
size_categories:
- 10K<n<100K
source_datasets:
- multi_eurlex
- xsum
- csebuetnlp/xlsum
- mlsum
- amazon_polarity
- https://sinai.ujaen.es/investigacion/recursos/coah
- https://sinai.ujaen.es/investigacion/recursos/coar
- carblacac/twitter-sentiment-analysis
- cardiffnlp/tweet_sentiment_multilingual
- https://www.kaggle.com/datasets/ricardomoya/tweets-poltica-espaa
- wiki_lingua
---
# Dataset Card for AuTexTification 2023
## Dataset Description
- **Homepage:** https://sites.google.com/view/autextification
- **Repository:** https://github.com/autextification/AuTexTification-Overview
- **Paper:** https://arxiv.org/abs/2309.11285
### Dataset Summary
AuTexTification 2023 @IberLEF2023 is a shared task focusing in Machine-Generated Text Detection and Model Attribution in English and Spanish.
The dataset includes human and generated text in 5 domains: tweets, reviews, how-to articles, news, and legal documents.
The generations are obtained using six language models: BLOOM-1B1, BLOOM-3B, BLOOM-7B1, Babbage, Curie, and text-davinci-003.
For more information, please refer to our overview paper: https://arxiv.org/abs/2309.11285
### Supported Tasks and Leaderboards
- Machine-Generated Text Detection
- Model Attribution
### Languages
English and Spanish
## Dataset Structure
### Data Instances
163k instances of labeled text in total.
### Data Fields
For MGT Detection:
- id
- prompt
- text
- label
- model
- domain
For Model Attribution:
- id
- prompt
- text
- label
- domain
### Data Splits
- MGT Detection Data:
| Language | Split | Human | Generated | Total |
| -------- | ----- | ------ | --------- | ------ |
| English | Train | 17.046 | 16.799 | 33.845 |
| | Test | 10.642 | 11.190 | 21.832 |
| | Total | 27.688 | 27.989 | |
| Spanish | Train | 15.787 | 16.275 | 32.062 |
| | Test | 11.209 | 8.920 | 20.129 |
| | Total | 26.996 | 25.195 | |
- Model Attribution Data:
| | | BLOOM | | | GPT | | | |
| -------- | ----- | ----- | ----- | ----- | ------- | ----- | ---------------- | ------ |
| Language | Split | 1B7 | 3B | 7B | babbage | curie | text-davinci-003 | Total |
| English | Train | 3.562 | 3.648 | 3.687 | 3.870 | 3.822 | 3.827 | 14.767 |
| | Test | 887 | 875 | 952 | 924 | 979 | 988 | 3.638 |
| | Total | 4.449 | 4.523 | 4.639 | 4.794 | 4.801 | 4.815 | |
| Spanish | Train | 3.422 | 3.514 | 3.575 | 3.788 | 3.770 | 3.866 | 14.299 |
| | Test | 870 | 867 | 878 | 946 | 1.004 | 917 | 3.561 |
| | Total | 4.292 | 4.381 | 4.453 | 4.734 | 4.774 | 4.783 | |
## Dataset Creation
### Curation Rationale
Human data was gathered and used to prompt language models, obtaining generated data.
Specific decisions were made to ensure the data gathering process was carried out in an unbiased manner, making the final human and generated texts probable continuations of a given prefix.
For more detailed information, please refer to the overview paper: https://arxiv.org/abs/2309.11285
### Source Data
The following datasets were used as human text:
- multi_eurlex
- xsum
- csebuetnlp/xlsum
- mlsum
- amazon_polarity
- https://sinai.ujaen.es/investigacion/recursos/coah
- https://sinai.ujaen.es/investigacion/recursos/coar
- carblacac/twitter-sentiment-analysis
- cardiffnlp/tweet_sentiment_multilingual
- https://www.kaggle.com/datasets/ricardomoya/tweets-poltica-espaa
- wiki_lingua
These datasets were only used as sources of human text. The labels of the datasets were not employed in any manner.
### Licensing Information
CC-BY-NC-SA-4.0
### Citation Information
```
@inproceedings{autextification2023,
title = "Overview of AuTexTification at IberLEF 2023: Detection and Attribution of Machine-Generated Text in Multiple Domains",
author = "Sarvazyan, Areg Mikael and
Gonz{\'a}lez, Jos{\'e} {\'A}ngel and
Franco-Salvador, Marc and
Rangel, Francisco and
Chulvi, Berta and
Rosso, Paolo",
month = sep,
year = "2023",
address = "Jaén, Spain",
booktitle = "Procesamiento del Lenguaje Natural",
}
``` |
flozi00/qa-tasks-german | 2023-10-07T07:39:26.000Z | [
"region:us"
] | flozi00 | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 43467
num_examples: 4
download_size: 0
dataset_size: 43467
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "qa-tasks-german"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
baebee/test_questions_self_instruct | 2023-10-06T12:37:58.000Z | [
"region:us"
] | baebee | null | null | null | 0 | 0 | Entry not found |
asoria/copy-BRAD | 2023-10-06T12:39:29.000Z | [
"region:us"
] | asoria | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: first_sentence
dtype: string
- name: second_sentence
dtype: string
- name: label
dtype:
class_label:
names:
0: 0
1: 1
splits:
- name: train
num_bytes: 1420233
num_examples: 10000
- name: validation
num_bytes: 133986
num_examples: 1000
download_size: 837486
dataset_size: 1554219
---
# Dataset Card for "Commonsense_Validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/nouminkanrennoskillbakkaagetetaranazekatsuyokunatta | 2023-10-06T13:38:28.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Noumin Kanren No Skill Bakka Agetetara Naze Ka Tsuyoku Natta
This is the image base of bangumi Noumin Kanren no Skill Bakka Agetetara Naze ka Tsuyoku Natta, we detected 32 characters, 1564 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 22 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 102 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 21 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 15 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 41 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 543 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 29 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 24 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 21 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 128 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 22 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 32 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 15 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 10 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 34 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 14 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 11 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 19 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 14 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 10 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 13 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 19 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 24 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 22 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 41 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 15 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 103 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 30 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 22 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 6 | [Download](29/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 30 | 5 | [Download](30/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| noise | 137 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
Vojtab42/guanaco-llama2-1k | 2023-10-06T12:43:59.000Z | [
"region:us"
] | Vojtab42 | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966693
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
carnival13/massive_5_lang_DA3_tokenized | 2023-10-06T12:49:41.000Z | [
"region:us"
] | carnival13 | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 419259395
num_examples: 552890
download_size: 127212717
dataset_size: 419259395
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_5_lang_DA3_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TottiPick/repo_name | 2023-10-06T13:10:36.000Z | [
"region:us"
] | TottiPick | null | null | null | 0 | 0 | Entry not found |
TottiPick/melody_extraction | 2023-10-06T13:11:05.000Z | [
"region:us"
] | TottiPick | null | null | null | 0 | 0 | Entry not found |
TottiPick/melody_extraction_huggingface | 2023-10-06T13:22:34.000Z | [
"region:us"
] | TottiPick | null | null | null | 0 | 0 | Entry not found |
ebony59/chai_feedback | 2023-10-06T13:13:48.000Z | [
"region:us"
] | ebony59 | null | null | null | 0 | 0 | ---
dataset_info:
features: []
splits:
- name: train
download_size: 324
dataset_size: 0
---
# Dataset Card for "chai_feedback"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ebony59/chai_guanaco_feedback | 2023-10-06T15:45:55.000Z | [
"region:us"
] | ebony59 | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: character
dtype: string
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 4081747
num_examples: 1979
download_size: 0
dataset_size: 4081747
---
# Dataset Card for "chai_guanaco_feedback"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
marcus2000/timelist_task_dataset | 2023-10-06T13:21:49.000Z | [
"region:us"
] | marcus2000 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Original
dtype: string
- name: Task
dtype: string
splits:
- name: train
num_bytes: 91073.55102040817
num_examples: 41
- name: test
num_bytes: 17770.448979591838
num_examples: 8
download_size: 62081
dataset_size: 108844.0
---
# Dataset Card for "timelist_task_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TomerMassas/melody_dataset | 2023-10-08T20:06:47.000Z | [
"region:us"
] | TomerMassas | null | null | null | 0 | 0 | Entry not found |
lunarflu/Developing_LLMs_Open_Closed_or_Democratic | 2023-10-06T13:28:18.000Z | [
"region:us"
] | lunarflu | null | null | null | 0 | 0 | https://x.com/natolambert/status/1710285440803344688?s=20 |
lissadesu/codeqa_v2 | 2023-10-06T13:38:30.000Z | [
"region:us"
] | lissadesu | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: labNo
dtype: float64
- name: taskNo
dtype: float64
- name: questioner
dtype: string
- name: question
dtype: string
- name: code
dtype: string
- name: startLine
dtype: float64
- name: endLine
dtype: float64
- name: questionType
dtype: string
- name: answer
dtype: string
- name: src
dtype: string
- name: code_processed
dtype: string
- name: id
dtype: string
- name: raw_code
dtype: string
- name: raw_comment
dtype: string
- name: comment
dtype: string
- name: q_code
dtype: string
splits:
- name: train
num_bytes: 46842820
num_examples: 35360
download_size: 17749500
dataset_size: 46842820
---
# Dataset Card for "codeqa_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
srinuksv/ap_schems | 2023-10-06T13:38:57.000Z | [
"license:mit",
"region:us"
] | srinuksv | null | null | null | 0 | 0 | ---
license: mit
---
|
deepghs/anime_portrait | 2023-10-08T17:26:21.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:openrail",
"art",
"not-for-all-audiences",
"region:us"
] | deepghs | null | null | null | 0 | 0 | ---
license: openrail
task_categories:
- image-classification
tags:
- art
- not-for-all-audiences
size_categories:
- 10K<n<100K
--- |
asoria/test_s3 | 2023-10-06T13:50:57.000Z | [
"task_categories:image-classification",
"annotations_creators:machine-generated",
"size_categories:10K<n<100K",
"source_datasets:extended|mnist",
"language:en",
"license:cc-by-sa-3.0",
"arxiv:2207.10495",
"region:us"
] | asoria | The images were created such that they have an unclear ground truth,
i.e., such that they are similar to multiple - but not all - of the datasets classes.
Robust and uncertainty-aware models should be able to detect and flag these ambiguous images.
As such, the dataset should be merged / mixed with the original dataset and we
provide such 'mixed' splits for convenience. Please refer to the dataset card for details. | @misc{https://doi.org/10.48550/arxiv.2207.10495,
doi = {10.48550/ARXIV.2207.10495},
url = {https://arxiv.org/abs/2207.10495},
author = {Weiss, Michael and Gómez, André García and Tonella, Paolo},
title = {A Forgotten Danger in DNN Supervision Testing: Generating and Detecting True Ambiguity},
publisher = {arXiv},
year = {2022}
} | null | 0 | 0 | ---
license: cc-by-sa-3.0
task_categories:
- image-classification
language:
- en
pretty_name: mnist_ambigous
size_categories:
- 10K<n<100K
source_datasets:
- extended|mnist
annotations_creators:
- machine-generated
---
# Mnist-Ambiguous
This dataset contains mnist-like images, but with an unclear ground truth. For each image, there are two classes which could be considered true.
Robust and uncertainty-aware DNNs should thus detect and flag these issues.
### Features
Same as mnist, the supervised dataset has an `image` (28x28 int array) and a `label` (int).
Additionally, the following features are exposed for your convenience:
- `text_label` (str): A textual representation of the probabilistic label, e.g. `p(0)=0.54, p(5)=0.46`
- `p_label` (list of floats): Ground-Truth probabilities for each class (two nonzero values for our ambiguous images)
- `is_ambiguous` (bool): Flag indicating if this is one of our ambiguous images (see 'splits' below)
### Splits
We provide four splits:
- `test`: 10'000 ambiguous images
- `train`: 10'000 ambiguous images - adding ambiguous images to the training set makes sure test-time ambiguous images are in-distribution.
- `test_mixed`: 20'000 images, consisting of the (shuffled) concatenation of our ambiguous `test` set and the nominal mnist test set by LeCun et. al.,
- `train_mixed`: 70'000 images, consisting of the (shuffled) concatenation of our ambiguous `training` and the nominal training set.
Note that the ambiguous test images are highly ambiguous (i.e., the two classes have very similar ground truth likelihoods),
the training set images allow for more unbalanced ambiguity.
This is to make the training set more closely connected to the nominal data, while still keeping the test set clearly ambiguous.
For research targeting explicitly aleatoric uncertainty, we recommend training the model using `train_mixed`.
Otherwise, our `test` set will lead to both epistemic and aleatoric uncertainty.
In related literature, such 'mixed' splits are sometimes denoted as *dirty* splits.
### Assessment and Validity
For a brief discussion of the strength and weaknesses of this dataset,
including a quantitative comparison to the (only) other ambiguous datasets available in the literature, we refer to our paper.
### Paper
Pre-print here: [https://arxiv.org/abs/2207.10495](https://arxiv.org/abs/2207.10495)
Citation:
```
@misc{https://doi.org/10.48550/arxiv.2207.10495,
doi = {10.48550/ARXIV.2207.10495},
url = {https://arxiv.org/abs/2207.10495},
author = {Weiss, Michael and Gómez, André García and Tonella, Paolo},
title = {A Forgotten Danger in DNN Supervision Testing: Generating and Detecting True Ambiguity},
publisher = {arXiv},
year = {2022}
}
```
### License
As this is a derivative work of mnist, which is CC-BY-SA 3.0 licensed, our dataset is released using the same license.
|
lissadesu/codeqa_v3 | 2023-10-06T13:53:22.000Z | [
"region:us"
] | lissadesu | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: labNo
dtype: float64
- name: taskNo
dtype: float64
- name: questioner
dtype: string
- name: question
dtype: string
- name: code
dtype: string
- name: startLine
dtype: float64
- name: endLine
dtype: float64
- name: questionType
dtype: string
- name: answer
dtype: string
- name: src
dtype: string
- name: code_processed
dtype: string
- name: id
dtype: string
- name: raw_code
dtype: string
- name: raw_comment
dtype: string
- name: comment
dtype: string
- name: q_code
dtype: string
splits:
- name: train
num_bytes: 46848295
num_examples: 35360
download_size: 17749500
dataset_size: 46848295
---
# Dataset Card for "codeqa_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/beasttamer | 2023-10-06T15:03:07.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Beast Tamer
This is the image base of bangumi Beast Tamer, we detected 25 characters, 1727 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 46 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 24 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 15 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 411 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 13 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 8 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 12 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 17 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 8 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 201 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 25 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 41 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 21 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 17 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 317 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 10 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 231 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 10 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 14 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 50 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 22 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 37 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 14 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 38 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 125 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/justbecause | 2023-10-06T14:49:54.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Just Because!
This is the image base of bangumi Just Because!, we detected 20 characters, 1430 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 218 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 14 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 15 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 28 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 99 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 21 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 43 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 228 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 65 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 14 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 21 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 15 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 12 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 106 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 21 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 357 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 7 | [Download](16/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 17 | 23 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 23 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 100 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/unlimitedfafnir | 2023-10-06T14:46:23.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Unlimited Fafnir
This is the image base of bangumi Unlimited Fafnir, we detected 17 characters, 1386 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 31 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 135 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 28 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 417 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 74 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 59 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 45 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 38 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 125 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 56 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 151 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 119 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 13 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 9 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 45 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 18 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 23 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
lissadesu/codeqa_reduced | 2023-10-06T13:54:14.000Z | [
"region:us"
] | lissadesu | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: labNo
dtype: float64
- name: taskNo
dtype: float64
- name: questioner
dtype: string
- name: question
dtype: string
- name: code
dtype: string
- name: startLine
dtype: float64
- name: endLine
dtype: float64
- name: questionType
dtype: string
- name: answer
dtype: string
- name: src
dtype: string
- name: code_processed
dtype: string
- name: id
dtype: string
- name: raw_code
dtype: string
- name: raw_comment
dtype: string
- name: comment
dtype: string
- name: q_code
dtype: string
splits:
- name: train
num_bytes: 39821050.75
num_examples: 30056
- name: test
num_bytes: 7027244.25
num_examples: 5304
download_size: 23830741
dataset_size: 46848295.0
---
# Dataset Card for "codeqa_final"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adityarra07/zurich_data | 2023-10-06T14:16:10.000Z | [
"region:us"
] | adityarra07 | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 537406557.186
num_examples: 2189
download_size: 535954349
dataset_size: 537406557.186
---
# Dataset Card for "zurich_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/idolish7 | 2023-10-06T15:53:59.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Idolish7
This is the image base of bangumi IDOLiSH7, we detected 27 characters, 3443 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 307 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 58 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 281 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 323 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 116 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 23 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 88 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 289 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 91 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 329 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 379 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 70 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 21 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 17 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 17 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 293 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 439 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 12 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 8 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 18 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 6 | [Download](20/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 21 | 9 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 14 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 7 | [Download](23/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 24 | 10 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 6 | [Download](25/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| noise | 212 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
mllab/dataset | 2023-10-06T14:44:15.000Z | [
"region:us"
] | mllab | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: objects
struct:
- name: bbox
sequence:
sequence: float64
- name: categories
sequence: int64
splits:
- name: train
num_bytes: 1154967823.0
num_examples: 266
- name: test
num_bytes: 1023487639.0
num_examples: 386
download_size: 2167702834
dataset_size: 2178455462.0
---
# Dataset Card for "dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hacktoberfest-corpus-es/spanish_dish_title | 2023-10-06T14:44:41.000Z | [
"license:mit",
"region:us"
] | hacktoberfest-corpus-es | null | null | null | 0 | 0 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: prompt
dtype: string
- name: image
dtype: image
- name: uuid
dtype: string
splits:
- name: train
num_bytes: 123357511.5769398
num_examples: 13170
- name: test
num_bytes: 6295620.691672235
num_examples: 659
- name: valid
num_bytes: 24795318.75338796
num_examples: 2634
download_size: 156595985
dataset_size: 154448451.022
---
|
Xenova/cmu-arctic-xvectors-extracted | 2023-10-06T14:59:01.000Z | [
"region:us"
] | Xenova | null | null | null | 0 | 0 | Entry not found |
asdfaasdfs/outlier | 2023-10-06T15:04:13.000Z | [
"region:us"
] | asdfaasdfs | null | null | null | 0 | 0 | Entry not found |
AbeShinzo0708/KishidaFumio_voice_data | 2023-10-06T15:09:28.000Z | [
"license:openrail",
"岸田文雄",
"KishidaFumio",
"region:us"
] | AbeShinzo0708 | null | null | null | 0 | 0 | ---
license: openrail
tags:
- 岸田文雄
- KishidaFumio
pretty_name: 岸田文雄
--- |
Ekiel/monDataSet | 2023-10-06T15:23:32.000Z | [
"region:us"
] | Ekiel | null | null | null | 0 | 0 | Entry not found |
zchmielewska/illustrative_life_table | 2023-10-06T15:32:40.000Z | [
"license:mit",
"region:us"
] | zchmielewska | null | null | null | 0 | 0 | ---
license: mit
---
|
BangumiBase/nobunagathefool | 2023-10-06T17:45:19.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Nobunaga The Fool
This is the image base of bangumi NOBUNAGA THE FOOL, we detected 36 characters, 2812 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 8 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 69 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 122 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 248 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 48 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 22 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 85 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 18 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 467 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 267 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 50 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 55 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 31 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 15 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 48 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 46 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 30 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 45 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 178 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 36 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 13 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 217 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 100 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 26 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 25 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 13 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 134 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 13 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 8 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 10 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 36 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 15 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 10 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 14 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 13 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 277 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
Felix9390/QuestionAnsweringbot | 2023-10-06T15:54:09.000Z | [
"region:us"
] | Felix9390 | null | null | null | 0 | 0 | Entry not found |
ai4ce/CoVPR | 2023-10-06T15:57:27.000Z | [
"license:apache-2.0",
"region:us"
] | ai4ce | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
Hack90/ncbi_genbank_part_48 | 2023-10-06T16:37:04.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 37408573115
num_examples: 3590
download_size: 16355151548
dataset_size: 37408573115
---
# Dataset Card for "ncbi_genbank_part_48"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_28 | 2023-10-07T01:52:13.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 32754969732
num_examples: 1307
download_size: 14813299268
dataset_size: 32754969732
---
# Dataset Card for "ncbi_genbank_part_28"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_38 | 2023-10-06T23:59:57.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 32814010540
num_examples: 1135
download_size: 0
dataset_size: 32814010540
---
# Dataset Card for "ncbi_genbank_part_38"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_18 | 2023-10-07T01:57:07.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 9462744773
num_examples: 13032669
download_size: 3869663931
dataset_size: 9462744773
---
# Dataset Card for "ncbi_genbank_part_18"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thrshr/CC3M | 2023-10-06T16:24:28.000Z | [
"region:us"
] | thrshr | null | null | null | 0 | 0 | Entry not found |
BangumiBase/inuninattarasukinahitonihirowareta | 2023-10-06T17:34:24.000Z | [
"size_categories:n<1K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- n<1K
---
# Bangumi Image Base of Inu Ni Nattara Suki Na Hito Ni Hirowareta
This is the image base of bangumi Inu ni Nattara Suki na Hito ni Hirowareta, we detected 9 characters, 406 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 67 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 92 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 14 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 11 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 23 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 32 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 74 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 44 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 49 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/zeronotsukaima | 2023-10-06T20:23:55.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Zero No Tsukaima
This is the image base of bangumi Zero no Tsukaima, we detected 64 characters, 7210 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 1450 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 31 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 30 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 258 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 8 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 66 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 297 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 60 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 34 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 18 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 26 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 27 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 28 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 32 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 198 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 41 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 103 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 31 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 21 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 12 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 152 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 55 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 63 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 203 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 28 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 30 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 23 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 47 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 23 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 30 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 16 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 38 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 1772 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 30 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 14 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 29 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 266 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 36 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 15 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 24 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 16 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 41 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 195 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 23 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 78 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 154 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 18 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 333 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 33 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 24 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 23 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 29 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 27 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 28 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 19 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 8 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 10 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 9 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 7 | [Download](58/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 59 | 22 | [Download](59/dataset.zip) |  |  |  |  |  |  |  |  |
| 60 | 8 | [Download](60/dataset.zip) |  |  |  |  |  |  |  |  |
| 61 | 10 | [Download](61/dataset.zip) |  |  |  |  |  |  |  |  |
| 62 | 5 | [Download](62/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| noise | 425 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
Hack90/ncbi_genbank_part_39 | 2023-10-06T17:00:42.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 31553866013
num_examples: 1218
download_size: 14299220624
dataset_size: 31553866013
---
# Dataset Card for "ncbi_genbank_part_39"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_19 | 2023-10-07T02:18:28.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 10292374144
num_examples: 14539438
download_size: 4229601328
dataset_size: 10292374144
---
# Dataset Card for "ncbi_genbank_part_19"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_29 | 2023-10-06T17:06:03.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 34358138224
num_examples: 38564
download_size: 15474999547
dataset_size: 34358138224
---
# Dataset Card for "ncbi_genbank_part_29"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_49 | 2023-10-06T17:19:48.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 33647454754
num_examples: 87952
download_size: 15354172665
dataset_size: 33647454754
---
# Dataset Card for "ncbi_genbank_part_49"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/shadowshouse | 2023-10-06T18:04:47.000Z | [
"size_categories:n<1K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- n<1K
---
# Bangumi Image Base of Shadows House
This is the image base of bangumi SHADOWS HOUSE, we detected 23 characters, 998 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 63 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 11 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 52 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 130 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 32 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 9 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 8 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 30 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 40 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 22 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 12 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 56 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 58 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 7 | [Download](13/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 14 | 11 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 9 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 293 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 15 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 7 | [Download](18/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 19 | 10 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 48 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 8 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 67 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/kaguyasamawakokurasetai | 2023-10-06T18:56:02.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Kaguya-sama Wa Kokurasetai
This is the image base of bangumi Kaguya-sama wa Kokurasetai, we detected 29 characters, 2797 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 530 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 16 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 38 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 29 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 15 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 11 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 242 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 651 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 33 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 29 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 69 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 30 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 25 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 23 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 19 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 25 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 15 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 178 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 11 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 101 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 25 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 68 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 263 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 26 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 19 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 8 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 8 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 5 | [Download](27/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| noise | 285 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
Hack90/ncbi_genbank_part_40 | 2023-10-06T17:32:08.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 35016354335
num_examples: 80526
download_size: 15795680024
dataset_size: 35016354335
---
# Dataset Card for "ncbi_genbank_part_40"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_20 | 2023-10-07T02:38:50.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 11487399500
num_examples: 13928917
download_size: 4846727220
dataset_size: 11487399500
---
# Dataset Card for "ncbi_genbank_part_20"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.