| id | lastModified | tags | author | description | citation | likes | downloads | created | card | card_len | embeddings |
|---|---|---|---|---|---|---|---|---|---|---|---|
BangumiBase/nobunagathefool | 2023-10-06T17:45:19.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | 0 | 0 | 2023-10-06T15:49:34 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Nobunaga The Fool
This is the image base of the bangumi NOBUNAGA THE FOOL. We detected 36 characters and 2,812 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise.** If you intend to train models on this dataset, we recommend preprocessing the downloaded data to eliminate potential noisy samples (roughly 1% of images).
Here is a preview of the characters:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 8 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 69 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 122 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 248 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 48 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 22 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 85 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 18 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 467 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 267 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 50 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 55 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 31 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 15 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 48 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 46 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 30 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 45 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 178 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 36 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 13 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 217 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 100 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 26 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 25 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 13 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 134 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 13 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 8 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 10 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 36 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 15 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 10 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 14 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 13 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 277 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
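Each cluster in the table above downloads as `<cluster>/dataset.zip`, with `-1` holding the noise cluster. A minimal sketch of collecting the usable character folders after extraction — the folder layout is assumed from the download links above, not verified:

```python
from pathlib import Path

def non_noise_clusters(root: Path) -> list[Path]:
    """Return the per-character cluster folders under `root`,
    skipping the `-1` noise cluster flagged in the table above."""
    return sorted(
        (d for d in root.iterdir() if d.is_dir() and d.name != "-1"),
        key=lambda d: int(d.name),
    )
```

From there, any remaining per-image cleanup (the ~1% noise rate noted above) still needs manual review or a classifier pass.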
| 12,519 | [
[
-0.04217529296875,
-0.01149749755859375,
0.00786590576171875,
0.0137481689453125,
-0.0159454345703125,
-0.004589080810546875,
-0.0010814666748046875,
-0.0228424072265625,
0.043548583984375,
0.031341552734375,
-0.0577392578125,
-0.051422119140625,
-0.040924072265... |
Felix9390/QuestionAnsweringbot | 2023-10-06T15:54:09.000Z | [
"region:us"
] | Felix9390 | null | null | 0 | 0 | 2023-10-06T15:54:09 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
sinagph/LDjnr-merged-formatted | 2023-10-06T16:06:06.000Z | [
"region:us"
] | sinagph | null | null | 0 | 0 | 2023-10-06T16:05:38 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
Hack90/ncbi_genbank_part_48 | 2023-10-06T16:37:04.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T16:12:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 37408573115
num_examples: 3590
download_size: 16355151548
dataset_size: 37408573115
---
# Dataset Card for "ncbi_genbank_part_48"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 642 | [
[
-0.043121337890625,
-0.009063720703125,
0.0157470703125,
0.01105499267578125,
-0.02655029296875,
0.0222320556640625,
0.0321044921875,
-0.0029048919677734375,
0.054840087890625,
0.033538818359375,
-0.05755615234375,
-0.06951904296875,
-0.024658203125,
0.00463... |
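The `dataset_info` block in the card above declares 3,590 training examples totaling 37,408,573,115 bytes. A quick sanity check of the implied average record size; the commented `load_dataset` call is a hedged sketch only (it needs network access and roughly 16 GB of disk unless streamed):

```python
# Figures copied from the card's dataset_info block above.
num_bytes = 37_408_573_115
num_examples = 3_590

avg_record_mib = num_bytes / num_examples / 1024**2
print(f"~{avg_record_mib:.1f} MiB per record")  # genome-scale sequences are large

# Hedged sketch of loading the split (untested here; streaming avoids
# downloading the full ~16 GB archive up front):
# from datasets import load_dataset
# ds = load_dataset("Hack90/ncbi_genbank_part_48", split="train", streaming=True)
```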
Hack90/ncbi_genbank_part_38 | 2023-10-06T23:59:57.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T16:17:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 32814010540
num_examples: 1135
download_size: 0
dataset_size: 32814010540
---
# Dataset Card for "ncbi_genbank_part_38"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 632 | [
[
-0.049346923828125,
-0.00878143310546875,
0.0196685791015625,
0.01441192626953125,
-0.0300445556640625,
0.0213623046875,
0.0301361083984375,
-0.0019550323486328125,
0.0479736328125,
0.041229248046875,
-0.049407958984375,
-0.06182861328125,
-0.03082275390625,
... |
thrshr/CC3M | 2023-10-06T16:24:28.000Z | [
"region:us"
] | thrshr | null | null | 0 | 0 | 2023-10-06T16:24:28 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.0149993896484375,
-0.06036376953125,
0.0379028320... |
BangumiBase/inuninattarasukinahitonihirowareta | 2023-10-06T17:34:24.000Z | [
"size_categories:n<1K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | 0 | 0 | 2023-10-06T16:34:30 | ---
license: mit
tags:
- art
size_categories:
- n<1K
---
# Bangumi Image Base of Inu Ni Nattara Suki Na Hito Ni Hirowareta
This is the image base of the bangumi Inu ni Nattara Suki na Hito ni Hirowareta. We detected 9 characters and 406 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise.** If you intend to train models on this dataset, we recommend preprocessing the downloaded data to eliminate potential noisy samples (roughly 1% of images).
Here is a preview of the characters:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 67 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 92 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 14 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 11 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 23 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 32 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 74 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 44 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 49 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| 4,083 | [
[
-0.047760009765625,
-0.01343536376953125,
0.0070037841796875,
0.0189208984375,
-0.0190887451171875,
-0.0028553009033203125,
0.0010213851928710938,
-0.0287628173828125,
0.038665771484375,
0.042510986328125,
-0.06060791015625,
-0.050689697265625,
-0.03866577148437... |
BangumiBase/zeronotsukaima | 2023-10-06T20:23:55.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | 0 | 0 | 2023-10-06T16:46:26 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Zero No Tsukaima
This is the image base of the bangumi Zero no Tsukaima. We detected 64 characters and 7,210 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise.** If you intend to train models on this dataset, we recommend preprocessing the downloaded data to eliminate potential noisy samples (roughly 1% of images).
Here is a preview of the characters:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 1450 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 31 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 30 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 258 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 8 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 66 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 297 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 60 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 34 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 18 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 26 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 27 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 28 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 32 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 198 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 41 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 103 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 31 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 21 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 12 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 152 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 55 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 63 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 203 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 28 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 30 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 23 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 47 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 23 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 30 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 16 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 38 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 1772 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 30 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 14 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 29 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 266 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 36 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 15 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 24 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 16 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 41 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 195 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 23 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 78 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 154 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 18 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 333 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 33 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 24 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 23 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 29 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 27 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 28 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 19 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 8 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 10 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 9 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 7 | [Download](58/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 59 | 22 | [Download](59/dataset.zip) |  |  |  |  |  |  |  |  |
| 60 | 8 | [Download](60/dataset.zip) |  |  |  |  |  |  |  |  |
| 61 | 10 | [Download](61/dataset.zip) |  |  |  |  |  |  |  |  |
| 62 | 5 | [Download](62/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| noise | 425 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| 21,309 | [
[
-0.04296875,
-0.00933837890625,
0.011993408203125,
0.01125335693359375,
-0.017578125,
-0.0045166015625,
-0.0008330345153808594,
-0.02301025390625,
0.040771484375,
0.032440185546875,
-0.059051513671875,
-0.054901123046875,
-0.041778564453125,
0.03350830078125... |
Hack90/ncbi_genbank_part_39 | 2023-10-06T17:00:42.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T16:46:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 31553866013
num_examples: 1218
download_size: 14299220624
dataset_size: 31553866013
---
# Dataset Card for "ncbi_genbank_part_39"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 642 | [
[
-0.050628662109375,
-0.008941650390625,
0.0196075439453125,
0.0135345458984375,
-0.03070068359375,
0.0257110595703125,
0.024932861328125,
-0.0028820037841796875,
0.047332763671875,
0.036468505859375,
-0.053131103515625,
-0.062255859375,
-0.02783203125,
-0.00... |
Hack90/ncbi_genbank_part_29 | 2023-10-06T17:06:03.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T16:51:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 34358138224
num_examples: 38564
download_size: 15474999547
dataset_size: 34358138224
---
# Dataset Card for "ncbi_genbank_part_29"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 643 | [
[
-0.046478271484375,
-0.01552581787109375,
0.0166778564453125,
0.0212860107421875,
-0.0221099853515625,
0.026641845703125,
0.034698486328125,
-0.003078460693359375,
0.050506591796875,
0.036834716796875,
-0.046051025390625,
-0.06268310546875,
-0.031005859375,
... |
Hack90/ncbi_genbank_part_49 | 2023-10-06T17:19:48.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T17:00:43 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 33647454754
num_examples: 87952
download_size: 15354172665
dataset_size: 33647454754
---
# Dataset Card for "ncbi_genbank_part_49"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 643 | [
[
-0.040679931640625,
-0.007289886474609375,
0.017425537109375,
0.01525115966796875,
-0.030670166015625,
0.025146484375,
0.032379150390625,
-0.00011152029037475586,
0.05499267578125,
0.03277587890625,
-0.050384521484375,
-0.066650390625,
-0.02557373046875,
0.0... |
BangumiBase/shadowshouse | 2023-10-06T18:04:47.000Z | [
"size_categories:n<1K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | 0 | 0 | 2023-10-06T17:08:27 | ---
license: mit
tags:
- art
size_categories:
- n<1K
---
# Bangumi Image Base of Shadows House
This is the image base of the bangumi SHADOWS HOUSE. We detected 23 characters and 998 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise.** If you intend to train models on this dataset, we recommend preprocessing the downloaded data to eliminate potential noisy samples (roughly 1% of images).
Here is a preview of the characters:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 63 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 11 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 52 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 130 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 32 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 9 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 8 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 30 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 40 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 22 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 12 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 56 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 58 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 7 | [Download](13/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 14 | 11 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 9 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 293 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 15 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 7 | [Download](18/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 19 | 10 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 48 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 8 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 67 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| 8,424 | [
[
-0.041351318359375,
-0.0085601806640625,
0.00873565673828125,
0.0128631591796875,
-0.01500701904296875,
-0.0020847320556640625,
-0.0004897117614746094,
-0.02197265625,
0.04010009765625,
0.035125732421875,
-0.057647705078125,
-0.055328369140625,
-0.04061889648437... |
BangumiBase/kaguyasamawakokurasetai | 2023-10-06T18:56:02.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | 0 | 0 | 2023-10-06T17:12:17 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Kaguya-sama Wa Kokurasetai
This is the image base of the bangumi Kaguya-sama wa Kokurasetai. We detected 29 characters and 2,797 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise.** If you intend to train models on this dataset, we recommend preprocessing the downloaded data to eliminate potential noisy samples (roughly 1% of images).
Here is a preview of the characters:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 530 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 16 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 38 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 29 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 15 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 11 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 242 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 651 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 33 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 29 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 69 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 30 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 25 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 23 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 19 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 25 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 15 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 178 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 11 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 101 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 25 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 68 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 263 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 26 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 19 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 8 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 8 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 5 | [Download](27/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| noise | 285 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| 10,339 | [
[
-0.04302978515625,
-0.00946807861328125,
0.0105133056640625,
0.01322174072265625,
-0.0181427001953125,
-0.006443023681640625,
-0.0029087066650390625,
-0.0229644775390625,
0.039703369140625,
0.031158447265625,
-0.05877685546875,
-0.053802490234375,
-0.04348754882... |
Hack90/ncbi_genbank_part_40 | 2023-10-06T17:32:08.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T17:16:49 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 35016354335
num_examples: 80526
download_size: 15795680024
dataset_size: 35016354335
---
# Dataset Card for "ncbi_genbank_part_40"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 643 | [
[
-0.052978515625,
-0.00986480712890625,
0.0179443359375,
0.0145263671875,
-0.0284423828125,
0.0260009765625,
0.0283050537109375,
-0.006649017333984375,
0.049041748046875,
0.03387451171875,
-0.05023193359375,
-0.06298828125,
-0.0284423828125,
-0.00444412231445... |
Hack90/ncbi_genbank_part_30 | 2023-10-06T17:34:42.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T17:20:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 32901664415
num_examples: 168885
download_size: 14692414119
dataset_size: 32901664415
---
# Dataset Card for "ncbi_genbank_part_30"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 644 | [
[
-0.049957275390625,
-0.014892578125,
0.0173187255859375,
0.019805908203125,
-0.0178680419921875,
0.0223388671875,
0.034637451171875,
-0.0029315948486328125,
0.04656982421875,
0.034576416015625,
-0.053009033203125,
-0.06427001953125,
-0.0269012451171875,
-0.0... |
Hack90/ncbi_genbank_part_50 | 2023-10-06T19:50:52.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T17:43:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 26640706946
num_examples: 3973282
download_size: 10256237433
dataset_size: 26640706946
---
# Dataset Card for "ncbi_genbank_part_50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 645 | [
[
-0.049468994140625,
-0.0083160400390625,
0.0171051025390625,
0.0178375244140625,
-0.019317626953125,
0.023101806640625,
0.033416748046875,
-0.00023043155670166016,
0.04986572265625,
0.03192138671875,
-0.052337646484375,
-0.066650390625,
-0.0250701904296875,
... |
Hack90/ncbi_genbank_part_41 | 2023-10-06T17:59:33.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T17:45:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 31049128200
num_examples: 273326
download_size: 13996445609
dataset_size: 31049128200
---
# Dataset Card for "ncbi_genbank_part_41"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 644 | [
[
-0.05303955078125,
-0.0086212158203125,
0.019134521484375,
0.0133056640625,
-0.03375244140625,
0.0232086181640625,
0.0304107666015625,
-0.00846099853515625,
0.05126953125,
0.038787841796875,
-0.0565185546875,
-0.0662841796875,
-0.0278778076171875,
0.00263595... |
autoevaluate/autoeval-eval-samsum-samsum-3cd2fc-93464145850 | 2023-10-06T18:00:28.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-06T17:48:21 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: google/pegasus-multi_news
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-multi_news
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sasha](https://huggingface.co/sasha) for evaluating this model. | 800 | [
[
-0.0345458984375,
-0.010498046875,
0.0174713134765625,
0.00916290283203125,
-0.01666259765625,
-0.01187896728515625,
0.00910186767578125,
-0.02587890625,
0.03228759765625,
0.0280303955078125,
-0.08087158203125,
-0.0127410888671875,
-0.048431396484375,
-0.011... |
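The `col_mapping` entries in the evaluation cards above tell the evaluator which dataset columns feed the summarization model (`dialogue` → text, `summary` → target for samsum). A minimal illustration of that renaming step — not AutoTrain's actual implementation:

```python
def apply_col_mapping(example: dict, col_mapping: dict) -> dict:
    """Rename dataset columns to the canonical names a task expects,
    e.g. {"text": "dialogue", "target": "summary"} for samsum."""
    return {canonical: example[source] for canonical, source in col_mapping.items()}

row = {"dialogue": "A: hi\nB: see you at 5", "summary": "They will meet at 5."}
mapped = apply_col_mapping(row, {"text": "dialogue", "target": "summary"})
```

The same mapping shape covers the xsum (`document`/`summary`) and cnn_dailymail (`article`/`highlights`) cards below it.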
autoevaluate/autoeval-eval-xsum-default-7e15d4-93465145851 | 2023-10-06T21:11:34.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-06T17:48:28 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: google/pegasus-multi_news
metrics: []
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-multi_news
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sasha](https://huggingface.co/sasha) for evaluating this model. | 796 | [
[
-0.037353515625,
-0.0085296630859375,
0.01531982421875,
0.0071258544921875,
-0.0156707763671875,
-0.010650634765625,
0.006927490234375,
-0.028045654296875,
0.032318115234375,
0.0267333984375,
-0.08123779296875,
-0.01288604736328125,
-0.045867919921875,
-0.01... |
autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-9ea0d3-93467145852 | 2023-10-06T21:24:22.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-06T17:48:33 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: google/pegasus-multi_news
metrics: []
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-multi_news
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sasha](https://huggingface.co/sasha) for evaluating this model. | 821 | [
[
-0.034698486328125,
-0.019775390625,
0.01111602783203125,
0.0130462646484375,
-0.01849365234375,
-0.00933074951171875,
0.004970550537109375,
-0.0281982421875,
0.0241546630859375,
0.023193359375,
-0.07440185546875,
-0.0193634033203125,
-0.052886962890625,
-0.... |
Hack90/ncbi_genbank_part_31 | 2023-10-06T18:01:43.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T17:49:46 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 29120787399
num_examples: 2140631
download_size: 12705990582
dataset_size: 29120787399
---
# Dataset Card for "ncbi_genbank_part_31"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 645 | [
[
-0.050811767578125,
-0.00899505615234375,
0.0179595947265625,
0.0178985595703125,
-0.0235748291015625,
0.021820068359375,
0.0343017578125,
-0.006877899169921875,
0.05120849609375,
0.033538818359375,
-0.051422119140625,
-0.0653076171875,
-0.030242919921875,
-... |
Poolq/we | 2023-10-06T17:52:41.000Z | [
"region:us"
] | Poolq | null | null | 0 | 0 | 2023-10-06T17:52:41 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
autoevaluate/autoeval-eval-squad-plain_text-b337fc-93468145853 | 2023-10-06T17:59:39.000Z | [
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-06T17:59:36 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
autoevaluate/autoeval-eval-squad-plain_text-b337fc-93468145854 | 2023-10-06T17:59:43.000Z | [
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-06T17:59:40 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
autoevaluate/autoeval-eval-squad-plain_text-b337fc-93468145855 | 2023-10-06T17:59:49.000Z | [
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-06T17:59:45 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
autoevaluate/autoeval-eval-squad-plain_text-b337fc-93468145856 | 2023-10-06T17:59:55.000Z | [
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-06T17:59:51 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
autoevaluate/autoeval-eval-squad-plain_text-b337fc-93468145857 | 2023-10-06T18:00:02.000Z | [
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-06T17:59:58 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
Hack90/ncbi_genbank_part_42 | 2023-10-06T18:26:01.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T18:12:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 30230002901
num_examples: 184506
download_size: 13696357931
dataset_size: 30230002901
---
# Dataset Card for "ncbi_genbank_part_42"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 644 | [
[
-0.05322265625,
-0.00716400146484375,
0.017822265625,
0.00917816162109375,
-0.03411865234375,
0.0178680419921875,
0.037322998046875,
-0.003993988037109375,
0.047515869140625,
0.04083251953125,
-0.053497314453125,
-0.06695556640625,
-0.0286865234375,
-0.00069... |
Hack90/ncbi_genbank_part_43 | 2023-10-06T18:28:55.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T18:27:58 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 1748784064
num_examples: 250557
download_size: 733720481
dataset_size: 1748784064
---
# Dataset Card for "ncbi_genbank_part_43"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 640 | [
[
-0.05279541015625,
-0.01047515869140625,
0.016876220703125,
0.01263427734375,
-0.03631591796875,
0.0142822265625,
0.029571533203125,
-0.01155853271484375,
0.050872802734375,
0.031402587890625,
-0.0634765625,
-0.07244873046875,
-0.0266571044921875,
0.00127601... |
nathanlu/posthog | 2023-10-06T19:03:24.000Z | [
"region:us"
] | nathanlu | null | null | 0 | 0 | 2023-10-06T19:03:24 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tr416/dataset_20231006_192150 | 2023-10-06T19:21:52.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T19:21:50 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73785
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_192150"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.043548583984375,
-0.00841522216796875,
0.01541900634765625,
0.03216552734375,
-0.01047515869140625,
-0.00811004638671875,
0.034088134765625,
-0.01219940185546875,
0.06011962890625,
0.041412353515625,
-0.055816650390625,
-0.037139892578125,
-0.041778564453125,... |
tr416/dataset_20231006_192401 | 2023-10-06T19:24:02.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T19:24:01 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73925
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_192401"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.043426513671875,
-0.01092529296875,
0.0179901123046875,
0.0338134765625,
-0.0100860595703125,
-0.0134429931640625,
0.03594970703125,
-0.01038360595703125,
0.057952880859375,
0.04241943359375,
-0.06146240234375,
-0.036163330078125,
-0.040130615234375,
-0.0... |
tr416/dataset_20231006_192902 | 2023-10-06T19:29:03.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T19:29:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73832
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_192902"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.0391845703125,
-0.01171112060546875,
0.017913818359375,
0.03369140625,
-0.0129241943359375,
-0.00965118408203125,
0.03338623046875,
-0.0082855224609375,
0.0570068359375,
0.03790283203125,
-0.052581787109375,
-0.036163330078125,
-0.04388427734375,
-0.01283... |
tr416/dataset_20231006_193224 | 2023-10-06T19:32:26.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T19:32:25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73841
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_193224"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.04461669921875,
-0.00849151611328125,
0.0182952880859375,
0.035980224609375,
-0.0091094970703125,
-0.00905609130859375,
0.0333251953125,
-0.00798797607421875,
0.054931640625,
0.04052734375,
-0.060302734375,
-0.039215087890625,
-0.04144287109375,
-0.012336... |
Hack90/ncbi_genbank_part_44 | 2023-10-06T19:39:07.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T19:34:42 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 15146612055
num_examples: 19935938
download_size: 5700205740
dataset_size: 15146612055
---
# Dataset Card for "ncbi_genbank_part_44"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 645 | [
[
-0.04638671875,
-0.00968170166015625,
0.0161590576171875,
0.01465606689453125,
-0.03375244140625,
0.01556396484375,
0.031341552734375,
-0.004669189453125,
0.048736572265625,
0.038848876953125,
-0.058868408203125,
-0.0699462890625,
-0.0302734375,
0.0024204254... |
Hack90/ncbi_genbank_part_32 | 2023-10-06T19:48:32.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T19:36:04 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 30511651329
num_examples: 2195743
download_size: 13612451510
dataset_size: 30511651329
---
# Dataset Card for "ncbi_genbank_part_32"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 645 | [
[
-0.048828125,
-0.0094757080078125,
0.0210418701171875,
0.018035888671875,
-0.0246734619140625,
0.0229034423828125,
0.034942626953125,
-0.007251739501953125,
0.04852294921875,
0.031829833984375,
-0.049163818359375,
-0.06427001953125,
-0.031463623046875,
-0.00... |
tr416/dataset_20231006_193820 | 2023-10-06T19:38:21.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T19:38:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74219
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_193820"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.045257568359375,
-0.0109405517578125,
0.0195465087890625,
0.02984619140625,
-0.013275146484375,
-0.01047515869140625,
0.033294677734375,
-0.0120849609375,
0.05694580078125,
0.042877197265625,
-0.05560302734375,
-0.038665771484375,
-0.04547119140625,
-0.01... |
tr416/dataset_20231006_193945 | 2023-10-06T19:39:46.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T19:39:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73904
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_193945"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.045745849609375,
-0.01038360595703125,
0.017608642578125,
0.0302886962890625,
-0.013916015625,
-0.0097808837890625,
0.031982421875,
-0.0118865966796875,
0.056304931640625,
0.0408935546875,
-0.057708740234375,
-0.0386962890625,
-0.044525146484375,
-0.01023... |
tr416/dataset_20231006_194015 | 2023-10-06T19:40:17.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T19:40:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73778
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_194015"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.0460205078125,
-0.01239776611328125,
0.0181732177734375,
0.031982421875,
-0.01158905029296875,
-0.0108489990234375,
0.030914306640625,
-0.009857177734375,
0.05731201171875,
0.0408935546875,
-0.057342529296875,
-0.03607177734375,
-0.0440673828125,
-0.01106... |
tr416/dataset_20231006_194418 | 2023-10-06T19:44:19.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T19:44:18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73758
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_194418"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.045196533203125,
-0.01166534423828125,
0.0189666748046875,
0.0290069580078125,
-0.0133514404296875,
-0.012969970703125,
0.031005859375,
-0.0118560791015625,
0.058685302734375,
0.043792724609375,
-0.06475830078125,
-0.038116455078125,
-0.040130615234375,
-... |
tr416/dataset_20231006_194657 | 2023-10-06T19:46:58.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T19:46:57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74091
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_194657"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.0438232421875,
-0.00910186767578125,
0.0161895751953125,
0.0284423828125,
-0.017822265625,
-0.0146942138671875,
0.031219482421875,
-0.006595611572265625,
0.055816650390625,
0.039703369140625,
-0.059600830078125,
-0.041351318359375,
-0.04302978515625,
-0.0... |
Intuit-GenSRF/hate-speech18-es | 2023-10-06T19:57:11.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 0 | 2023-10-06T19:57:09 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: user_id
dtype: int64
- name: subforum_id
dtype: int64
- name: num_contexts
dtype: int64
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 3529082
num_examples: 10944
download_size: 2257063
dataset_size: 3529082
---
# Dataset Card for "hate_speech18-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 679 | [
[
-0.04534912109375,
-0.0298614501953125,
0.0008449554443359375,
0.01041412353515625,
-0.003086090087890625,
0.007160186767578125,
-0.002315521240234375,
-0.0208740234375,
0.06134033203125,
0.028167724609375,
-0.060333251953125,
-0.05377197265625,
-0.0579223632812... |
BangumiBase/theidolmsteranime | 2023-10-06T22:54:36.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | 1 | 0 | 2023-10-06T19:59:04 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of The Idolm@ster [anime]
This is the image base of bangumi THE IDOLM@STER [Anime]. We detected 45 characters and 7376 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noisy samples.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 570 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 710 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 456 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 75 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 99 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 54 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 86 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 19 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 28 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 39 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 357 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 27 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 28 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 76 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 296 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 327 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 280 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 58 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 403 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 16 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 13 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 350 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 788 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 43 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 26 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 26 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 363 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 8 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 30 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 7 | [Download](29/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 30 | 37 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 53 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 25 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 20 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 48 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 11 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 23 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 8 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 160 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 439 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 27 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 7 | [Download](41/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 42 | 5 | [Download](42/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 43 | 254 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 601 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| 15,355 | [
[
-0.04351806640625,
-0.00962066650390625,
0.0070953369140625,
0.0129547119140625,
-0.0171356201171875,
-0.004940032958984375,
-0.00196075439453125,
-0.0214996337890625,
0.040008544921875,
0.032562255859375,
-0.057037353515625,
-0.053131103515625,
-0.0422058105468... |
towhid/guanaco-llama2-1k | 2023-10-06T20:02:23.000Z | [
"region:us"
] | towhid | null | null | 0 | 0 | 2023-10-06T20:02:22 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966693
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 444 | [
[
-0.0220184326171875,
-0.0128173828125,
0.01739501953125,
0.037689208984375,
-0.03839111328125,
0.000885009765625,
0.0258941650390625,
-0.0190277099609375,
0.0645751953125,
0.0298919677734375,
-0.054718017578125,
-0.06707763671875,
-0.05029296875,
-0.01603698... |
Hack90/ncbi_genbank_part_45 | 2023-10-06T20:08:01.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T20:03:42 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 14300153721
num_examples: 25055512
download_size: 5495669961
dataset_size: 14300153721
---
# Dataset Card for "ncbi_genbank_part_45"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 645 | [
[
-0.051177978515625,
-0.010894775390625,
0.01448822021484375,
0.0192108154296875,
-0.030181884765625,
0.0198822021484375,
0.029022216796875,
-0.005825042724609375,
0.050445556640625,
0.0328369140625,
-0.060638427734375,
-0.06982421875,
-0.0280609130859375,
0.... |
Hack90/ncbi_genbank_part_33 | 2023-10-06T20:17:12.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T20:06:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 26425956795
num_examples: 6163404
download_size: 11468522245
dataset_size: 26425956795
---
# Dataset Card for "ncbi_genbank_part_33"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 645 | [
[
-0.049468994140625,
-0.01224517822265625,
0.0229339599609375,
0.01532745361328125,
-0.0232696533203125,
0.0244293212890625,
0.03387451171875,
-0.00591278076171875,
0.0455322265625,
0.03399658203125,
-0.053192138671875,
-0.065673828125,
-0.02685546875,
-0.002... |
tr416/dataset_20231006_200650 | 2023-10-06T20:06:51.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T20:06:50 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73595
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_200650"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.04290771484375,
-0.00836181640625,
0.01532745361328125,
0.0289154052734375,
-0.00832366943359375,
-0.018524169921875,
0.0341796875,
-0.0168304443359375,
0.05517578125,
0.0400390625,
-0.06414794921875,
-0.038818359375,
-0.032806396484375,
-0.00420761108398... |
tr416/dataset_20231006_200728 | 2023-10-06T20:07:29.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T20:07:28 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74080
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_200728"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.04180908203125,
-0.00322723388671875,
0.0175628662109375,
0.031890869140625,
-0.01097869873046875,
-0.0173187255859375,
0.037841796875,
-0.0134735107421875,
0.049407958984375,
0.04388427734375,
-0.058380126953125,
-0.04107666015625,
-0.0357666015625,
-0.0... |
BangumiBase/blackclover | 2023-10-07T18:02:59.000Z | [
"size_categories:10K<n<100K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | 0 | 0 | 2023-10-06T20:09:10 | ---
license: mit
tags:
- art
size_categories:
- 10K<n<100K
---
# Bangumi Image Base of Black Clover
This is the image base of bangumi Black Clover. We detected 187 characters and 19538 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noisy samples.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:----------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|
| 0 | 367 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 332 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 168 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 155 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 401 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 198 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 27 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 109 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 55 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 55 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 168 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 2568 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 112 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 113 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 179 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 407 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 124 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 102 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 58 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 44 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 275 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 69 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 73 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 190 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 76 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 18 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 223 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 30 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 162 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 192 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 303 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 39 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 193 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 45 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 174 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 988 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 113 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 25 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 107 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 150 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 104 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 45 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 85 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 60 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 33 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 43 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 34 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 24 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 128 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 24 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 92 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 22 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 52 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 33 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 52 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 36 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 93 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 38 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 58 | [Download](58/dataset.zip) |  |  |  |  |  |  |  |  |
| 59 | 599 | [Download](59/dataset.zip) |  |  |  |  |  |  |  |  |
| 60 | 47 | [Download](60/dataset.zip) |  |  |  |  |  |  |  |  |
| 61 | 53 | [Download](61/dataset.zip) |  |  |  |  |  |  |  |  |
| 62 | 69 | [Download](62/dataset.zip) |  |  |  |  |  |  |  |  |
| 63 | 140 | [Download](63/dataset.zip) |  |  |  |  |  |  |  |  |
| 64 | 52 | [Download](64/dataset.zip) |  |  |  |  |  |  |  |  |
| 65 | 29 | [Download](65/dataset.zip) |  |  |  |  |  |  |  |  |
| 66 | 73 | [Download](66/dataset.zip) |  |  |  |  |  |  |  |  |
| 67 | 31 | [Download](67/dataset.zip) |  |  |  |  |  |  |  |  |
| 68 | 134 | [Download](68/dataset.zip) |  |  |  |  |  |  |  |  |
| 69 | 28 | [Download](69/dataset.zip) |  |  |  |  |  |  |  |  |
| 70 | 36 | [Download](70/dataset.zip) |  |  |  |  |  |  |  |  |
| 71 | 81 | [Download](71/dataset.zip) |  |  |  |  |  |  |  |  |
| 72 | 56 | [Download](72/dataset.zip) |  |  |  |  |  |  |  |  |
| 73 | 25 | [Download](73/dataset.zip) |  |  |  |  |  |  |  |  |
| 74 | 113 | [Download](74/dataset.zip) |  |  |  |  |  |  |  |  |
| 75 | 24 | [Download](75/dataset.zip) |  |  |  |  |  |  |  |  |
| 76 | 45 | [Download](76/dataset.zip) |  |  |  |  |  |  |  |  |
| 77 | 60 | [Download](77/dataset.zip) |  |  |  |  |  |  |  |  |
| 78 | 42 | [Download](78/dataset.zip) |  |  |  |  |  |  |  |  |
| 79 | 24 | [Download](79/dataset.zip) |  |  |  |  |  |  |  |  |
| 80 | 26 | [Download](80/dataset.zip) |  |  |  |  |  |  |  |  |
| 81 | 25 | [Download](81/dataset.zip) |  |  |  |  |  |  |  |  |
| 82 | 28 | [Download](82/dataset.zip) |  |  |  |  |  |  |  |  |
| 83 | 84 | [Download](83/dataset.zip) |  |  |  |  |  |  |  |  |
| 84 | 18 | [Download](84/dataset.zip) |  |  |  |  |  |  |  |  |
| 85 | 38 | [Download](85/dataset.zip) |  |  |  |  |  |  |  |  |
| 86 | 46 | [Download](86/dataset.zip) |  |  |  |  |  |  |  |  |
| 87 | 34 | [Download](87/dataset.zip) |  |  |  |  |  |  |  |  |
| 88 | 37 | [Download](88/dataset.zip) |  |  |  |  |  |  |  |  |
| 89 | 112 | [Download](89/dataset.zip) |  |  |  |  |  |  |  |  |
| 90 | 68 | [Download](90/dataset.zip) |  |  |  |  |  |  |  |  |
| 91 | 51 | [Download](91/dataset.zip) |  |  |  |  |  |  |  |  |
| 92 | 26 | [Download](92/dataset.zip) |  |  |  |  |  |  |  |  |
| 93 | 43 | [Download](93/dataset.zip) |  |  |  |  |  |  |  |  |
| 94 | 136 | [Download](94/dataset.zip) |  |  |  |  |  |  |  |  |
| 95 | 62 | [Download](95/dataset.zip) |  |  |  |  |  |  |  |  |
| 96 | 475 | [Download](96/dataset.zip) |  |  |  |  |  |  |  |  |
| 97 | 14 | [Download](97/dataset.zip) |  |  |  |  |  |  |  |  |
| 98 | 31 | [Download](98/dataset.zip) |  |  |  |  |  |  |  |  |
| 99 | 23 | [Download](99/dataset.zip) |  |  |  |  |  |  |  |  |
| 100 | 55 | [Download](100/dataset.zip) |  |  |  |  |  |  |  |  |
| 101 | 246 | [Download](101/dataset.zip) |  |  |  |  |  |  |  |  |
| 102 | 65 | [Download](102/dataset.zip) |  |  |  |  |  |  |  |  |
| 103 | 48 | [Download](103/dataset.zip) |  |  |  |  |  |  |  |  |
| 104 | 12 | [Download](104/dataset.zip) |  |  |  |  |  |  |  |  |
| 105 | 40 | [Download](105/dataset.zip) |  |  |  |  |  |  |  |  |
| 106 | 67 | [Download](106/dataset.zip) |  |  |  |  |  |  |  |  |
| 107 | 15 | [Download](107/dataset.zip) |  |  |  |  |  |  |  |  |
| 108 | 28 | [Download](108/dataset.zip) |  |  |  |  |  |  |  |  |
| 109 | 48 | [Download](109/dataset.zip) |  |  |  |  |  |  |  |  |
| 110 | 20 | [Download](110/dataset.zip) |  |  |  |  |  |  |  |  |
| 111 | 34 | [Download](111/dataset.zip) |  |  |  |  |  |  |  |  |
| 112 | 36 | [Download](112/dataset.zip) |  |  |  |  |  |  |  |  |
| 113 | 590 | [Download](113/dataset.zip) |  |  |  |  |  |  |  |  |
| 114 | 23 | [Download](114/dataset.zip) |  |  |  |  |  |  |  |  |
| 115 | 64 | [Download](115/dataset.zip) |  |  |  |  |  |  |  |  |
| 116 | 76 | [Download](116/dataset.zip) |  |  |  |  |  |  |  |  |
| 117 | 63 | [Download](117/dataset.zip) |  |  |  |  |  |  |  |  |
| 118 | 15 | [Download](118/dataset.zip) |  |  |  |  |  |  |  |  |
| 119 | 36 | [Download](119/dataset.zip) |  |  |  |  |  |  |  |  |
| 120 | 120 | [Download](120/dataset.zip) |  |  |  |  |  |  |  |  |
| 121 | 53 | [Download](121/dataset.zip) |  |  |  |  |  |  |  |  |
| 122 | 36 | [Download](122/dataset.zip) |  |  |  |  |  |  |  |  |
| 123 | 16 | [Download](123/dataset.zip) |  |  |  |  |  |  |  |  |
| 124 | 14 | [Download](124/dataset.zip) |  |  |  |  |  |  |  |  |
| 125 | 19 | [Download](125/dataset.zip) |  |  |  |  |  |  |  |  |
| 126 | 80 | [Download](126/dataset.zip) |  |  |  |  |  |  |  |  |
| 127 | 26 | [Download](127/dataset.zip) |  |  |  |  |  |  |  |  |
| 128 | 18 | [Download](128/dataset.zip) |  |  |  |  |  |  |  |  |
| 129 | 145 | [Download](129/dataset.zip) |  |  |  |  |  |  |  |  |
| 130 | 17 | [Download](130/dataset.zip) |  |  |  |  |  |  |  |  |
| 131 | 386 | [Download](131/dataset.zip) |  |  |  |  |  |  |  |  |
| 132 | 31 | [Download](132/dataset.zip) |  |  |  |  |  |  |  |  |
| 133 | 20 | [Download](133/dataset.zip) |  |  |  |  |  |  |  |  |
| 134 | 25 | [Download](134/dataset.zip) |  |  |  |  |  |  |  |  |
| 135 | 13 | [Download](135/dataset.zip) |  |  |  |  |  |  |  |  |
| 136 | 21 | [Download](136/dataset.zip) |  |  |  |  |  |  |  |  |
| 137 | 45 | [Download](137/dataset.zip) |  |  |  |  |  |  |  |  |
| 138 | 154 | [Download](138/dataset.zip) |  |  |  |  |  |  |  |  |
| 139 | 12 | [Download](139/dataset.zip) |  |  |  |  |  |  |  |  |
| 140 | 177 | [Download](140/dataset.zip) |  |  |  |  |  |  |  |  |
| 141 | 36 | [Download](141/dataset.zip) |  |  |  |  |  |  |  |  |
| 142 | 25 | [Download](142/dataset.zip) |  |  |  |  |  |  |  |  |
| 143 | 93 | [Download](143/dataset.zip) |  |  |  |  |  |  |  |  |
| 144 | 35 | [Download](144/dataset.zip) |  |  |  |  |  |  |  |  |
| 145 | 24 | [Download](145/dataset.zip) |  |  |  |  |  |  |  |  |
| 146 | 187 | [Download](146/dataset.zip) |  |  |  |  |  |  |  |  |
| 147 | 32 | [Download](147/dataset.zip) |  |  |  |  |  |  |  |  |
| 148 | 30 | [Download](148/dataset.zip) |  |  |  |  |  |  |  |  |
| 149 | 11 | [Download](149/dataset.zip) |  |  |  |  |  |  |  |  |
| 150 | 13 | [Download](150/dataset.zip) |  |  |  |  |  |  |  |  |
| 151 | 13 | [Download](151/dataset.zip) |  |  |  |  |  |  |  |  |
| 152 | 14 | [Download](152/dataset.zip) |  |  |  |  |  |  |  |  |
| 153 | 24 | [Download](153/dataset.zip) |  |  |  |  |  |  |  |  |
| 154 | 20 | [Download](154/dataset.zip) |  |  |  |  |  |  |  |  |
| 155 | 1323 | [Download](155/dataset.zip) |  |  |  |  |  |  |  |  |
| 156 | 18 | [Download](156/dataset.zip) |  |  |  |  |  |  |  |  |
| 157 | 46 | [Download](157/dataset.zip) |  |  |  |  |  |  |  |  |
| 158 | 30 | [Download](158/dataset.zip) |  |  |  |  |  |  |  |  |
| 159 | 26 | [Download](159/dataset.zip) |  |  |  |  |  |  |  |  |
| 160 | 79 | [Download](160/dataset.zip) |  |  |  |  |  |  |  |  |
| 161 | 17 | [Download](161/dataset.zip) |  |  |  |  |  |  |  |  |
| 162 | 21 | [Download](162/dataset.zip) |  |  |  |  |  |  |  |  |
| 163 | 56 | [Download](163/dataset.zip) |  |  |  |  |  |  |  |  |
| 164 | 22 | [Download](164/dataset.zip) |  |  |  |  |  |  |  |  |
| 165 | 38 | [Download](165/dataset.zip) |  |  |  |  |  |  |  |  |
| 166 | 38 | [Download](166/dataset.zip) |  |  |  |  |  |  |  |  |
| 167 | 14 | [Download](167/dataset.zip) |  |  |  |  |  |  |  |  |
| 168 | 26 | [Download](168/dataset.zip) |  |  |  |  |  |  |  |  |
| 169 | 9 | [Download](169/dataset.zip) |  |  |  |  |  |  |  |  |
| 170 | 11 | [Download](170/dataset.zip) |  |  |  |  |  |  |  |  |
| 171 | 19 | [Download](171/dataset.zip) |  |  |  |  |  |  |  |  |
| 172 | 15 | [Download](172/dataset.zip) |  |  |  |  |  |  |  |  |
| 173 | 14 | [Download](173/dataset.zip) |  |  |  |  |  |  |  |  |
| 174 | 61 | [Download](174/dataset.zip) |  |  |  |  |  |  |  |  |
| 175 | 46 | [Download](175/dataset.zip) |  |  |  |  |  |  |  |  |
| 176 | 12 | [Download](176/dataset.zip) |  |  |  |  |  |  |  |  |
| 177 | 10 | [Download](177/dataset.zip) |  |  |  |  |  |  |  |  |
| 178 | 16 | [Download](178/dataset.zip) |  |  |  |  |  |  |  |  |
| 179 | 44 | [Download](179/dataset.zip) |  |  |  |  |  |  |  |  |
| 180 | 14 | [Download](180/dataset.zip) |  |  |  |  |  |  |  |  |
| 181 | 52 | [Download](181/dataset.zip) |  |  |  |  |  |  |  |  |
| 182 | 8 | [Download](182/dataset.zip) |  |  |  |  |  |  |  |  |
| 183 | 20 | [Download](183/dataset.zip) |  |  |  |  |  |  |  |  |
| 184 | 6 | [Download](184/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 185 | 6 | [Download](185/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| noise | 474 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| 61,628 | [
[
-0.039398193359375,
-0.00962066650390625,
0.012664794921875,
0.0163421630859375,
-0.015899658203125,
-0.004608154296875,
0.0023784637451171875,
-0.023834228515625,
0.044036865234375,
0.03216552734375,
-0.05804443359375,
-0.054412841796875,
-0.042144775390625,
... |
tr416/dataset_20231006_201232 | 2023-10-06T20:12:33.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T20:12:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73794
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_201232"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.046630859375,
-0.005596160888671875,
0.0199432373046875,
0.0328369140625,
-0.00992584228515625,
-0.01474761962890625,
0.03521728515625,
-0.01490020751953125,
0.053314208984375,
0.041839599609375,
-0.06317138671875,
-0.04168701171875,
-0.03997802734375,
-0... |
tr416/dataset_20231006_201304 | 2023-10-06T20:13:06.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T20:13:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73952
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_201304"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.04034423828125,
-0.009521484375,
0.0200042724609375,
0.0316162109375,
-0.01398468017578125,
-0.01436614990234375,
0.035369873046875,
-0.01216888427734375,
0.060516357421875,
0.040069580078125,
-0.064208984375,
-0.03814697265625,
-0.03594970703125,
-0.0037... |
tr416/dataset_20231006_202109 | 2023-10-06T20:21:10.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T20:21:09 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73882
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_202109"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.0408935546875,
-0.00907135009765625,
0.0173492431640625,
0.0310821533203125,
-0.00997161865234375,
-0.006153106689453125,
0.0361328125,
-0.01251220703125,
0.05938720703125,
0.0430908203125,
-0.0626220703125,
-0.039306640625,
-0.040008544921875,
-0.0108642... |
tr416/dataset_20231006_202208 | 2023-10-06T20:22:09.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T20:22:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74223
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_202208"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.042236328125,
-0.006732940673828125,
0.01812744140625,
0.0283355712890625,
-0.014129638671875,
-0.0079193115234375,
0.03594970703125,
-0.0154266357421875,
0.059112548828125,
0.044097900390625,
-0.060333251953125,
-0.040374755859375,
-0.042266845703125,
-0... |
tr416/dataset_20231006_202418 | 2023-10-06T20:24:19.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T20:24:18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73698
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_202418"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.04229736328125,
-0.005237579345703125,
0.0170440673828125,
0.029632568359375,
-0.015625,
-0.007358551025390625,
0.034698486328125,
-0.0132598876953125,
0.05914306640625,
0.044464111328125,
-0.0604248046875,
-0.039642333984375,
-0.0413818359375,
-0.0084838... |
sunlab/PySecDB | 2023-10-06T21:12:06.000Z | [
"task_categories:feature-extraction",
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"code",
"commit",
"patch",
"region:us"
] | sunlab | null | null | 0 | 0 | 2023-10-06T20:24:30 | ---
license: apache-2.0
task_categories:
- feature-extraction
- text-classification
- summarization
- text-generation
tags:
- code
- commit
- patch
language:
- en
pretty_name: PySecDB
size_categories:
- 1K<n<10K
---
# PySecDB: security commit dataset in Python
## Description
To foster large-scale research on vulnerability mitigation and to enable a comparison of different detection approaches, we make our dataset ***PySecDB*** from our ICSME23 paper publicly available.
PySecDB is a real-world Python security commit dataset that contains around 1.2K security commits and 2.8K non-security commits. You can find more details on the dataset in the paper *"[Exploring Security Commits in Python](https://csis.gmu.edu/ksun/)"*.
## Data Structure
PySecDB is stored in `json` format. Each sample contains 5 keys and has the following structure:
```json
{
    "id": "the unique ID of the entry",
    "label": "the patch type: \"security\" or \"non-security\"",
    "content": "the content of the patch (str)",
    "source": "the source of the patch: \"MITRE\" or \"wild\"",
    "CVE-ID": "the CVE ID if one exists: \"CVE-XXXX-XXXXX\" or \"NA\""
}
```
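For illustration, here is a minimal sketch of filtering entries by label. The two in-memory samples are invented to match the schema above; their field values are hypothetical and not taken from PySecDB:

```python
# Two invented entries that follow the PySecDB schema (values are hypothetical).
sample = [
    {"id": 1, "label": "security", "content": "diff --git a/app.py b/app.py ...",
     "source": "MITRE", "CVE-ID": "CVE-2021-00000"},
    {"id": 2, "label": "non-security", "content": "diff --git a/README.md ...",
     "source": "wild", "CVE-ID": "NA"},
]

def security_commits(entries):
    """Keep only the entries labeled as security patches."""
    return [e for e in entries if e["label"] == "security"]

print(len(security_commits(sample)))  # -> 1
```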
## Disclaimer & Download Agreement<span id="jump"></span>
To download the PySecDB dataset, you must agree to the following Disclaimer & Download Agreement terms. Please read them carefully before submitting the PySecDB request form.
- PySecDB was constructed and cross-checked by three experts who work in security patch research.
Because subjective factors can lead to misclassification, the Sun Security Laboratory (SunLab) cannot guarantee 100% accuracy for the samples in the dataset.
- The copyright of the PySecDB dataset is owned by SunLab.
- PySecDB should be used only for non-commercial research and/or personal purposes. The dataset must not be used for commercial or otherwise profitable purposes.
- The PySecDB dataset must not be resold or redistributed. Anyone who has obtained PySecDB should not share it with others without permission from SunLab.
## Team
The PySecDB dataset is built by [Sun Security Laboratory](https://sunlab-gmu.github.io/) (SunLab) at [George Mason University](https://www2.gmu.edu/), Fairfax, VA.
## Citations
**If you are using PySecDB for work that will result in a publication (thesis, dissertation, paper, or article), please use the following citation:**
```bibtex
@article{sun2023exploring,
title={Exploring Security Commits in Python},
author={Sun, Shiyu and Wang, Shu and Wang, Xinda and Xing, Yunlong and Zhang, Elisa and Sun, Kun},
journal={arXiv preprint arXiv:2307.11853},
year={2023}
}
```
| 2,640 | [
[
-0.0169219970703125,
-0.050567626953125,
0.00888824462890625,
0.033599853515625,
-0.004596710205078125,
-0.007335662841796875,
0.00025153160095214844,
-0.0244903564453125,
0.0212249755859375,
0.0160980224609375,
-0.0206298828125,
-0.047515869140625,
-0.036621093... |
Hack90/ncbi_genbank_part_34 | 2023-10-06T20:44:06.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T20:30:31 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 33614209063
num_examples: 62458
download_size: 15102741772
dataset_size: 33614209063
---
# Dataset Card for "ncbi_genbank_part_34"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 643 | [
[
-0.046234130859375,
-0.007350921630859375,
0.021240234375,
0.018096923828125,
-0.0248260498046875,
0.0234527587890625,
0.033721923828125,
-0.00276947021484375,
0.0438232421875,
0.03143310546875,
-0.05511474609375,
-0.06378173828125,
-0.026275634765625,
-0.00... |
tr416/dataset_20231006_203135 | 2023-10-06T20:31:36.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T20:31:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73705
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_203135"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.0443115234375,
-0.00833892822265625,
0.016357421875,
0.031158447265625,
-0.01389312744140625,
-0.0104827880859375,
0.03271484375,
-0.01399993896484375,
0.05816650390625,
0.04150390625,
-0.0638427734375,
-0.042236328125,
-0.04193115234375,
-0.0073738098144... |
tr416/dataset_20231006_203612 | 2023-10-06T20:36:14.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T20:36:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73868
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_203612"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.043212890625,
-0.007049560546875,
0.015838623046875,
0.02960205078125,
-0.012939453125,
-0.00910186767578125,
0.033843994140625,
-0.0149993896484375,
0.05853271484375,
0.03961181640625,
-0.06268310546875,
-0.03826904296875,
-0.04193115234375,
-0.010467529... |
towhid/aesir-test2 | 2023-10-07T16:46:04.000Z | [
"region:us"
] | towhid | null | null | 0 | 0 | 2023-10-06T20:38:54 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 68
num_examples: 17
download_size: 707
dataset_size: 68
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "aesir-test2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 420 | [
[
-0.04425048828125,
-0.013885498046875,
-0.0005764961242675781,
0.00567626953125,
-0.01291656494140625,
-0.006591796875,
0.02777099609375,
-0.021270751953125,
0.04248046875,
0.0220947265625,
-0.0380859375,
-0.035919189453125,
-0.04461669921875,
-0.01187896728... |
Hack90/ncbi_genbank_part_35 | 2023-10-06T21:10:19.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T20:56:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 31326431657
num_examples: 1222
download_size: 14219174128
dataset_size: 31326431657
---
# Dataset Card for "ncbi_genbank_part_35"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 642 | [
[
-0.049285888671875,
-0.0102996826171875,
0.021392822265625,
0.0164337158203125,
-0.026275634765625,
0.0271759033203125,
0.03094482421875,
-0.00634002685546875,
0.042755126953125,
0.036224365234375,
-0.050567626953125,
-0.0667724609375,
-0.0240631103515625,
-... |
Hack90/ncbi_genbank_part_36 | 2023-10-06T21:37:05.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T21:23:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 32019675075
num_examples: 1276
download_size: 14482789388
dataset_size: 32019675075
---
# Dataset Card for "ncbi_genbank_part_36"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 642 | [
[
-0.049835205078125,
-0.0087127685546875,
0.0251312255859375,
0.01343536376953125,
-0.0219268798828125,
0.0261688232421875,
0.0330810546875,
-0.0020160675048828125,
0.0478515625,
0.03387451171875,
-0.060028076171875,
-0.05841064453125,
-0.0225372314453125,
-0... |
Hack90/ncbi_genbank_part_37 | 2023-10-06T22:04:57.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T21:49:58 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 33117444507
num_examples: 1628
download_size: 14926056837
dataset_size: 33117444507
---
# Dataset Card for "ncbi_genbank_part_37"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 642 | [
[
-0.05023193359375,
-0.01509857177734375,
0.018310546875,
0.0125579833984375,
-0.032745361328125,
0.0209808349609375,
0.0277862548828125,
-0.005016326904296875,
0.04791259765625,
0.035552978515625,
-0.053131103515625,
-0.0631103515625,
-0.0254364013671875,
-0... |
Hack90/ncbi_genbank_part_61 | 2023-10-06T22:45:22.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T22:29:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 32323629638
num_examples: 119131
download_size: 14698797773
dataset_size: 32323629638
---
# Dataset Card for "ncbi_genbank_part_61"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 644 | [
[
-0.04156494140625,
-0.00885009765625,
0.01541900634765625,
0.006061553955078125,
-0.0296173095703125,
0.015594482421875,
0.035430908203125,
-0.01104736328125,
0.0543212890625,
0.0386962890625,
-0.051605224609375,
-0.06842041015625,
-0.0220794677734375,
0.000... |
Hack90/ncbi_genbank_part_71 | 2023-10-07T00:00:16.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T22:30:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
1899Deposit38-ECV/Joaosaldanha | 2023-10-06T22:43:27.000Z | [
"region:us"
] | 1899Deposit38-ECV | null | null | 0 | 0 | 2023-10-06T22:42:36 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Hack90/ncbi_genbank_part_62 | 2023-10-06T23:11:13.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T23:01:50 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 23316507081
num_examples: 4572223
download_size: 10026708270
dataset_size: 23316507081
---
# Dataset Card for "ncbi_genbank_part_62"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 645 | [
[
-0.040130615234375,
-0.0103912353515625,
0.018218994140625,
0.006824493408203125,
-0.0267486572265625,
0.0124664306640625,
0.037017822265625,
-0.0030918121337890625,
0.050872802734375,
0.03753662109375,
-0.048675537109375,
-0.068115234375,
-0.02886962890625,
... |
ZelaAI/test_repo | 2023-10-07T21:31:09.000Z | [
"region:us"
] | ZelaAI | null | null | 0 | 0 | 2023-10-06T23:08:28 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tr416/dataset_20231006_231107 | 2023-10-06T23:11:08.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T23:11:07 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73705
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_231107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.041839599609375,
-0.005218505859375,
0.0174407958984375,
0.031341552734375,
-0.01235198974609375,
-0.0093536376953125,
0.03240966796875,
-0.0100250244140625,
0.05902099609375,
0.043914794921875,
-0.057342529296875,
-0.036895751953125,
-0.0404052734375,
-0... |
tr416/dataset_20231006_231419 | 2023-10-06T23:14:20.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T23:14:19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73847
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_231419"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.04254150390625,
-0.0053863525390625,
0.016815185546875,
0.0310821533203125,
-0.01335906982421875,
-0.007366180419921875,
0.03216552734375,
-0.00921630859375,
0.059417724609375,
0.044708251953125,
-0.059478759765625,
-0.03729248046875,
-0.0399169921875,
-0... |
tr416/dataset_20231006_231926 | 2023-10-06T23:19:27.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T23:19:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73865
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_231926"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.041412353515625,
-0.006130218505859375,
0.01861572265625,
0.031494140625,
-0.01186370849609375,
-0.00746917724609375,
0.031463623046875,
-0.0079498291015625,
0.059356689453125,
0.044036865234375,
-0.059417724609375,
-0.03826904296875,
-0.04022216796875,
-... |
tr416/dataset_20231006_232347 | 2023-10-06T23:23:48.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T23:23:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74080
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_232347"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.0419921875,
-0.00766754150390625,
0.017974853515625,
0.032501220703125,
-0.01363372802734375,
-0.00795745849609375,
0.031646728515625,
-0.007633209228515625,
0.057952880859375,
0.041900634765625,
-0.059234619140625,
-0.03619384765625,
-0.039276123046875,
... |
tr416/dataset_20231006_233006 | 2023-10-06T23:30:07.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T23:30:06 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73943
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_233006"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.044342041015625,
-0.006992340087890625,
0.016754150390625,
0.029388427734375,
-0.01317596435546875,
-0.01000213623046875,
0.032867431640625,
-0.0096893310546875,
0.0555419921875,
0.0430908203125,
-0.057830810546875,
-0.0362548828125,
-0.041229248046875,
-... |
Hack90/ncbi_genbank_part_63 | 2023-10-06T23:35:56.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T23:32:19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 12920615864
num_examples: 13999730
download_size: 5041172591
dataset_size: 12920615864
---
# Dataset Card for "ncbi_genbank_part_63"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 645 | [
[
-0.039703369140625,
-0.01163482666015625,
0.0172271728515625,
0.0028400421142578125,
-0.0266571044921875,
0.0183258056640625,
0.033599853515625,
-0.007595062255859375,
0.054473876953125,
0.040771484375,
-0.049072265625,
-0.07037353515625,
-0.030517578125,
0.... |
tr416/dataset_20231006_233701 | 2023-10-06T23:37:02.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T23:37:01 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74101
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_233701"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.043914794921875,
-0.008941650390625,
0.01580810546875,
0.0293426513671875,
-0.0130462646484375,
-0.01039886474609375,
0.03271484375,
-0.007114410400390625,
0.058197021484375,
0.043731689453125,
-0.058319091796875,
-0.0369873046875,
-0.0406494140625,
-0.01... |
tr416/dataset_20231006_233908 | 2023-10-06T23:39:09.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T23:39:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73813
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_233908"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.04376220703125,
-0.006900787353515625,
0.01611328125,
0.0305023193359375,
-0.0125274658203125,
-0.00933074951171875,
0.03265380859375,
-0.00861358642578125,
0.05780029296875,
0.0439453125,
-0.05718994140625,
-0.037384033203125,
-0.04107666015625,
-0.01049... |
tr416/dataset_20231006_234030 | 2023-10-06T23:40:31.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-06T23:40:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73965
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_234030"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.0455322265625,
-0.004039764404296875,
0.0154571533203125,
0.0296783447265625,
-0.01284027099609375,
-0.01053619384765625,
0.0309600830078125,
-0.004535675048828125,
0.058746337890625,
0.041778564453125,
-0.058837890625,
-0.037139892578125,
-0.039306640625,
... |
Hack90/ncbi_genbank_part_64 | 2023-10-07T00:01:49.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-06T23:52:54 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 23605755944
num_examples: 1596418
download_size: 10216572338
dataset_size: 23605755944
---
# Dataset Card for "ncbi_genbank_part_64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 645 | [
[
-0.04608154296875,
-0.00934600830078125,
0.02130126953125,
0.00937652587890625,
-0.029296875,
0.01507568359375,
0.03448486328125,
-0.005641937255859375,
0.047088623046875,
0.039093017578125,
-0.046905517578125,
-0.07122802734375,
-0.03662109375,
-0.001207351... |
autoevaluate/autoeval-eval-amazon_polarity-amazon_polarity-afc8c5-93509145863 | 2023-10-07T00:19:46.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-07T00:04:56 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- amazon_polarity
eval_info:
task: binary_classification
model: AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon
metrics: []
dataset_name: amazon_polarity
dataset_config: amazon_polarity
dataset_split: test
col_mapping:
text: content
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon
* Dataset: amazon_polarity
* Config: amazon_polarity
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@AdamCodd](https://huggingface.co/AdamCodd) for evaluating this model. | 937 | [
[
-0.034332275390625,
-0.025848388671875,
0.017242431640625,
0.0194244384765625,
-0.01409149169921875,
0.006435394287109375,
-0.0012025833129882812,
-0.026153564453125,
0.0076751708984375,
0.031585693359375,
-0.06634521484375,
-0.03436279296875,
-0.05352783203125,... |
Hack90/ncbi_genbank_part_65 | 2023-10-07T00:26:31.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-07T00:19:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 19753145244
num_examples: 1476991
download_size: 8576777094
dataset_size: 19753145244
---
# Dataset Card for "ncbi_genbank_part_65"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 644 | [
[
-0.037384033203125,
-0.015045166015625,
0.01288604736328125,
0.006885528564453125,
-0.02679443359375,
0.014251708984375,
0.029693603515625,
-0.00836181640625,
0.04736328125,
0.031890869140625,
-0.04791259765625,
-0.0723876953125,
-0.02471923828125,
-0.000463... |
teleprint-me/function-calling | 2023-10-07T02:06:25.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | teleprint-me | null | null | 0 | 0 | 2023-10-07T00:27:36 | ---
license: cc-by-nc-sa-4.0
---
# Function Calling Dataset
## Description
The Function Calling Dataset is designed to facilitate the training of natural language models to understand and generate structured function calls within a conversational context. This dataset contains a collection of interactions where users engage in conversations with a virtual assistant and request specific actions or information using structured function calls.
## Dataset Contents
- Regular Conversations: The dataset includes standard conversational cues where users ask questions, make requests, or seek information.
- Structured Function Calls: Using the `[FUNC]` tag, the dataset provides examples of how to generate structured outputs, representing function calls. These function calls encompass various tasks and actions.
- Marrying Structure and Natural Language: Following each function call, there are natural language explanations or elaborations. This ensures that the dataset strikes a balance between structured and human-friendly responses.
## Usage
This dataset is intended for training and fine-tuning natural language models, enabling them to understand and generate structured function calls in response to user queries. It can be valuable for projects involving virtual assistants, chatbots, or any application where structured actions are required within a conversational context.
## License
This dataset is made available under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/).
---
| 1,606 | [
[
-0.0178985595703125,
-0.07025146484375,
0.0021266937255859375,
0.02899169921875,
-0.00870513916015625,
-0.009765625,
-0.0006461143493652344,
-0.0273284912109375,
0.01136016845703125,
0.0606689453125,
-0.06671142578125,
-0.0506591796875,
0.0085601806640625,
0... |
Hack90/ncbi_genbank_part_66 | 2023-10-07T00:47:28.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 0 | 2023-10-07T00:44:40 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
artdwn/stable-diffusion-backup | 2023-10-07T01:49:58.000Z | [
"arxiv:2211.06679",
"region:us"
] | artdwn | null | null | 0 | 0 | 2023-10-07T00:53:22 | # Stable Diffusion web UI
A browser interface based on Gradio library for Stable Diffusion.

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One-click install and run script (but you still must install Python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a `((tuxedo))` - will pay more attention to tuxedo
- a man in a `(tuxedo:1.21)` - alternative syntax
- select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on macOS) to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3-dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
 - Can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
- Styles, a way to save part of a prompt and easily apply it via dropdown later
- Variations, a way to generate the same image but with tiny differences
- Seed resizing, a way to generate the same image but at a slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image's dimension must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for:
- [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended)
- [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
- [Intel CPUs, Intel GPUs (both integrated and discrete)](https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon) (external wiki page)
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)
### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.
### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed and execute the following command:
```bash
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
For the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) [crawlable wiki](https://github-wiki-see.page/m/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- LyCORIS - KohakuBlueleaf
- Restart sampling - lambertae - https://github.com/Newbeeer/diffusion_restart_sampling
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
| 11,683 | [
[
-0.054534912109375,
-0.06951904296875,
0.03265380859375,
0.03778076171875,
0.0016508102416992188,
-0.005138397216796875,
0.01467132568359375,
-0.0333251953125,
0.018646240234375,
0.00787353515625,
-0.04449462890625,
-0.0256500244140625,
-0.0270233154296875,
... |
jjisd/pandanew | 2023-10-07T01:42:59.000Z | [
"region:us"
] | jjisd | null | null | 0 | 0 | 2023-10-07T01:40:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
CHDCruze/ep1techwalkers | 2023-10-07T01:41:46.000Z | [
"region:us"
] | CHDCruze | null | null | 0 | 0 | 2023-10-07T01:40:22 | - Arc Browser Logo
- Graphic representation of AI with GPT-3.5 and Anthropic’s Logos
- Visual of a Browser Tab
- Download Icon with a document being renamed
- Hover Action Icon
- ChatGPT Dialogue Box
- Prototyping or Draft Sketch
- Boosts Icon (upward arrow or power symbol)
- Picture of CEO Josh Miller (from the livestream if possible)
- Feedback/Comments Icon | 362 | [
[
-0.044189453125,
-0.02459716796875,
0.00928497314453125,
0.0038299560546875,
0.00868988037109375,
0.05426025390625,
0.016357421875,
-0.05047607421875,
0.06427001953125,
0.0142669677734375,
-0.061187744140625,
-0.000759124755859375,
-0.029937744140625,
0.0118... |
artdwn/sd | 2023-10-11T12:25:30.000Z | [
"region:us"
] | artdwn | null | null | 0 | 0 | 2023-10-07T02:08:45 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tr416/dataset_20231007_023707 | 2023-10-07T02:37:07.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-07T02:37:07 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.057220458984375,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.00507354736328125,
0.0513916015625,
0.0169830322265625,
-0.052032470703125,
-0.014984130859375,
-0.060455322265625,
0.037... |
tr416/dataset_20231007_024059 | 2023-10-07T02:41:00.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-07T02:40:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73943
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_024059"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.036773681640625,
-0.0084686279296875,
0.01611328125,
0.029998779296875,
-0.0168914794921875,
-0.01256561279296875,
0.035247802734375,
-0.01116943359375,
0.0589599609375,
0.042877197265625,
-0.056976318359375,
-0.03607177734375,
-0.040771484375,
-0.0085067... |
tr416/dataset_20231007_024249 | 2023-10-07T02:42:50.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-07T02:42:49 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73878
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_024249"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.03790283203125,
-0.007541656494140625,
0.0167388916015625,
0.03216552734375,
-0.0167083740234375,
-0.0126495361328125,
0.036712646484375,
-0.00823974609375,
0.0577392578125,
0.042266845703125,
-0.059844970703125,
-0.035491943359375,
-0.0404052734375,
-0.0... |
tr416/dataset_20231007_024410 | 2023-10-07T02:44:11.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-07T02:44:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73926
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_024410"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.038055419921875,
-0.006458282470703125,
0.0156097412109375,
0.031707763671875,
-0.0152587890625,
-0.0120086669921875,
0.035308837890625,
-0.01010894775390625,
0.058441162109375,
0.043304443359375,
-0.059417724609375,
-0.035064697265625,
-0.04046630859375,
... |
tr416/dataset_20231007_024626 | 2023-10-07T02:46:28.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-07T02:46:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73818
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_024626"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.037139892578125,
-0.00762939453125,
0.017364501953125,
0.032012939453125,
-0.0153961181640625,
-0.01206207275390625,
0.0350341796875,
-0.0080718994140625,
0.05712890625,
0.04144287109375,
-0.059539794921875,
-0.037322998046875,
-0.040863037109375,
-0.0103... |
tr416/dataset_20231007_024652 | 2023-10-07T02:46:53.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-07T02:46:52 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74082
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_024652"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.036376953125,
-0.00638580322265625,
0.0154876708984375,
0.033447265625,
-0.01541900634765625,
-0.0133819580078125,
0.036407470703125,
-0.009033203125,
0.058563232421875,
0.04083251953125,
-0.05926513671875,
-0.03656005859375,
-0.040283203125,
-0.009704589... |
tr416/dataset_20231007_024754 | 2023-10-07T02:47:55.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-07T02:47:54 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73962
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_024754"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.0347900390625,
-0.0069427490234375,
0.016326904296875,
0.031219482421875,
-0.01554107666015625,
-0.01309967041015625,
0.03631591796875,
-0.00879669189453125,
0.0570068359375,
0.041900634765625,
-0.05853271484375,
-0.035064697265625,
-0.039459228515625,
-0... |
tr416/dataset_20231007_025212 | 2023-10-07T02:52:14.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-07T02:52:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73961
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_025212"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.036041259765625,
-0.007472991943359375,
0.01666259765625,
0.033050537109375,
-0.01480865478515625,
-0.01220703125,
0.037139892578125,
-0.01006317138671875,
0.060028076171875,
0.039825439453125,
-0.05926513671875,
-0.03546142578125,
-0.040435791015625,
-0.... |
tr416/dataset_20231007_025331 | 2023-10-07T02:53:32.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-07T02:53:31 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73708
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_025331"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.036285400390625,
-0.006786346435546875,
0.0179595947265625,
0.032928466796875,
-0.01605224609375,
-0.01050567626953125,
0.035919189453125,
-0.0090179443359375,
0.058197021484375,
0.040435791015625,
-0.05950927734375,
-0.03656005859375,
-0.0401611328125,
-... |
tr416/dataset_20231007_025421 | 2023-10-07T02:54:22.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-07T02:54:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73904
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_025421"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.036102294921875,
-0.00878143310546875,
0.017578125,
0.032073974609375,
-0.01427459716796875,
-0.0125885009765625,
0.036102294921875,
-0.00983428955078125,
0.05718994140625,
0.043060302734375,
-0.059051513671875,
-0.034576416015625,
-0.039764404296875,
-0.... |
tr416/dataset_20231007_025600 | 2023-10-07T02:56:02.000Z | [
"region:us"
] | tr416 | null | null | 0 | 0 | 2023-10-07T02:56:00 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74033
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_025600"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [ [ -0.036865234375, -0.0103759765625, 0.0157470703125, 0.031158447265625, -0.01534271240234375, -0.01154327392578125, 0.03594970703125, -0.0095977783203125, 0.057891845703125, 0.040924072265625, -0.057830810546875, -0.03521728515625, -0.041046142578125, -0.0096... |
tr416/dataset_20231007_025659 | 2023-10-07T02:57:00.000Z | [ "region:us" ] | tr416 | null | null | 0 | 0 | 2023-10-07T02:56:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73961
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_025659"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [ [ -0.03570556640625, -0.009735107421875, 0.01593017578125, 0.030029296875, -0.0157470703125, -0.01203155517578125, 0.035491943359375, -0.01043701171875, 0.059295654296875, 0.042144775390625, -0.05682373046875, -0.035186767578125, -0.040863037109375, -0.0080871... |
tr416/dataset_20231007_025908 | 2023-10-07T02:59:09.000Z | [ "region:us" ] | tr416 | null | null | 0 | 0 | 2023-10-07T02:59:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74279
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_025908"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [ [ -0.03515625, -0.00759124755859375, 0.0174102783203125, 0.030792236328125, -0.016937255859375, -0.010833740234375, 0.036346435546875, -0.009979248046875, 0.05999755859375, 0.043670654296875, -0.055908203125, -0.0343017578125, -0.040283203125, -0.0067100524902... |
tr416/dataset_20231007_030433 | 2023-10-07T03:04:35.000Z | [ "region:us" ] | tr416 | null | null | 0 | 0 | 2023-10-07T03:04:33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73993
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_030433"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [ [ -0.0396728515625, -0.006923675537109375, 0.020782470703125, 0.0340576171875, -0.0167083740234375, -0.0156402587890625, 0.03717041015625, -0.00972747802734375, 0.0567626953125, 0.043975830078125, -0.059234619140625, -0.0384521484375, -0.040069580078125, -0.00... |
tr416/dataset_20231007_030710 | 2023-10-07T03:07:12.000Z | [ "region:us" ] | tr416 | null | null | 0 | 0 | 2023-10-07T03:07:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73696
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_030710"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [ [ -0.03912353515625, -0.006744384765625, 0.0194091796875, 0.034881591796875, -0.016448974609375, -0.0149383544921875, 0.034942626953125, -0.010101318359375, 0.057647705078125, 0.04278564453125, -0.056610107421875, -0.036712646484375, -0.04119873046875, -0.0053... |