id (string) | lastModified (string) | tags (list) | author (string, nullable) | description (string, nullable) | citation (string, nullable) | cardData (null) | likes (int64) | downloads (int64) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
Hack90/ncbi_genbank_part_30 | 2023-10-06T17:34:42.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 32901664415
num_examples: 168885
download_size: 14692414119
dataset_size: 32901664415
---
# Dataset Card for "ncbi_genbank_part_30"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
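All of the `ncbi_genbank_part_*` cards in this dump share one schema: string `id`, `sequence`, `name`, and `description` columns plus integer `features` and `seq_length` counts. A minimal sketch of reading one part with the `datasets` library; streaming is an assumption made here to sidestep the ~15 GB download:

```python
from datasets import load_dataset

# Stream the train split to avoid pulling the ~15 GB download up front.
ds = load_dataset("Hack90/ncbi_genbank_part_30", split="train", streaming=True)

# Inspect the first record's metadata without materializing the whole part.
record = next(iter(ds))
print(record["id"], record["name"], record["seq_length"])
```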
Hack90/ncbi_genbank_part_9 | 2023-10-06T19:46:23.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 20591910291
num_examples: 12188
download_size: 4208046819
dataset_size: 20591910291
---
# Dataset Card for "ncbi_genbank_part_9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_50 | 2023-10-06T19:50:52.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 26640706946
num_examples: 3973282
download_size: 10256237433
dataset_size: 26640706946
---
# Dataset Card for "ncbi_genbank_part_50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_41 | 2023-10-06T17:59:33.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 31049128200
num_examples: 273326
download_size: 13996445609
dataset_size: 31049128200
---
# Dataset Card for "ncbi_genbank_part_41"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_21 | 2023-10-07T03:00:47.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 12245208393
num_examples: 15929500
download_size: 5119781029
dataset_size: 12245208393
---
# Dataset Card for "ncbi_genbank_part_21"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-samsum-samsum-3cd2fc-93464145850 | 2023-10-06T18:00:28.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: google/pegasus-multi_news
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-multi_news
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sasha](https://huggingface.co/sasha) for evaluating this model. |
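The `col_mapping` block is what ties the generic summarization interface to this particular dataset: `text` is read from samsum's `dialogue` column and `target` from its `summary` column. A minimal sketch of the same remapping with the `datasets` library (the explicit rename is illustrative, not how AutoTrain does it internally):

```python
from datasets import load_dataset

# samsum stores its input as "dialogue" and its reference as "summary";
# per the card's col_mapping, the evaluator consumes them as text/target.
samsum = load_dataset("samsum", split="test")
samsum = samsum.rename_columns({"dialogue": "text", "summary": "target"})
print(samsum.column_names)  # ['id', 'text', 'target']
```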
autoevaluate/autoeval-eval-xsum-default-7e15d4-93465145851 | 2023-10-06T21:11:34.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: google/pegasus-multi_news
metrics: []
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-multi_news
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sasha](https://huggingface.co/sasha) for evaluating this model. |
autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-9ea0d3-93467145852 | 2023-10-06T21:24:22.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: google/pegasus-multi_news
metrics: []
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-multi_news
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sasha](https://huggingface.co/sasha) for evaluating this model. |
Hack90/ncbi_genbank_part_31 | 2023-10-06T18:01:43.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 29120787399
num_examples: 2140631
download_size: 12705990582
dataset_size: 29120787399
---
# Dataset Card for "ncbi_genbank_part_31"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Poolq/we | 2023-10-06T17:52:41.000Z | [
"region:us"
] | Poolq | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squad-plain_text-b337fc-93468145853 | 2023-10-06T17:59:39.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squad-plain_text-b337fc-93468145854 | 2023-10-06T17:59:43.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squad-plain_text-b337fc-93468145855 | 2023-10-06T17:59:49.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squad-plain_text-b337fc-93468145856 | 2023-10-06T17:59:55.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squad-plain_text-b337fc-93468145857 | 2023-10-06T18:00:02.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
Hack90/ncbi_genbank_part_42 | 2023-10-06T18:26:01.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 30230002901
num_examples: 184506
download_size: 13696357931
dataset_size: 30230002901
---
# Dataset Card for "ncbi_genbank_part_42"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_22 | 2023-10-06T18:26:05.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 16393902737
num_examples: 12591387
download_size: 6879416017
dataset_size: 16393902737
---
# Dataset Card for "ncbi_genbank_part_22"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Intuit-GenSRF/sexting-nsfw-adultconten-es | 2023-10-06T18:24:43.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 89678
num_examples: 538
download_size: 0
dataset_size: 89678
---
# Dataset Card for "sexting-nsfw-adultconten-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_43 | 2023-10-06T18:28:55.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 1748784064
num_examples: 250557
download_size: 733720481
dataset_size: 1748784064
---
# Dataset Card for "ncbi_genbank_part_43"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SouBryan/Cod_MW2019_Precision_Airstrike_Dataset | 2023-10-06T18:44:59.000Z | [
"license:mit",
"region:us"
] | SouBryan | null | null | null | 0 | 0 | ---
license: mit
---
|
Hack90/ncbi_genbank_part_23 | 2023-10-06T19:04:38.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 32376257548
num_examples: 660870
download_size: 14556362694
dataset_size: 32376257548
---
# Dataset Card for "ncbi_genbank_part_23"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Eu001/Aurora | 2023-10-06T20:03:50.000Z | [
"license:openrail",
"region:us"
] | Eu001 | null | null | null | 0 | 0 | ---
license: openrail
---
|
nathanlu/posthog | 2023-10-06T19:03:24.000Z | [
"region:us"
] | nathanlu | null | null | null | 0 | 0 | Entry not found |
Sharka/CIVQA_easyocr_encode_valid | 2023-10-06T19:19:19.000Z | [
"region:us"
] | Sharka | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: input_ids
sequence: int64
- name: bbox
dtype:
array2_d:
shape:
- 512
- 4
dtype: int64
- name: attention_mask
sequence: int64
- name: image
dtype:
array3_d:
shape:
- 3
- 224
- 224
dtype: int64
- name: start_positions
dtype: int64
- name: end_positions
dtype: int64
- name: questions
dtype: string
- name: answers
dtype: string
splits:
- name: validation
num_bytes: 21069068623
num_examples: 17079
download_size: 707118847
dataset_size: 21069068623
---
# Dataset Card for "CIVQA_easyocr_encode_valid"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
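The fixed shapes in this schema (a 512×4 `bbox` array, a 3×224×224 `image` tensor, and token-level `start_positions`/`end_positions`) match the input layout of LayoutLM-style extractive document QA; that reading is inferred from the features, not stated on the card. A minimal sketch of pulling out one encoded answer span:

```python
from datasets import load_dataset

ds = load_dataset("Sharka/CIVQA_easyocr_encode_valid", split="validation")
ex = ds[0]

# start/end positions index into input_ids; mapping the span back to text
# would require the tokenizer used during encoding, which the card omits.
span_ids = ex["input_ids"][ex["start_positions"] : ex["end_positions"] + 1]
print(ex["questions"], "->", ex["answers"], "| span token ids:", span_ids)
```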
tr416/dataset_20231006_192150 | 2023-10-06T19:21:52.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73785
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_192150"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
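Cards like this one describe datasets that are already tokenized: each row holds `input_ids` and an `attention_mask` instead of raw text. A minimal sketch of feeding such a split to a PyTorch `DataLoader`; the default collator is assumed to suffice because the byte counts (762,696 bytes over 297 rows) suggest a fixed sequence length:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

ds = load_dataset("tr416/dataset_20231006_192150", split="train")
ds.set_format(type="torch", columns=["input_ids", "attention_mask"])

# Fixed-length rows stack cleanly, so no padding collator is needed here.
loader = DataLoader(ds, batch_size=8)
batch = next(iter(loader))
print(batch["input_ids"].shape, batch["attention_mask"].shape)
```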
tr416/dataset_20231006_192401 | 2023-10-06T19:24:02.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73925
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_192401"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_24 | 2023-10-06T19:42:58.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 33717252746
num_examples: 41233
download_size: 15120789725
dataset_size: 33717252746
---
# Dataset Card for "ncbi_genbank_part_24"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_192902 | 2023-10-06T19:29:03.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73832
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_192902"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_193224 | 2023-10-06T19:32:26.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73841
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_193224"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_44 | 2023-10-06T19:39:07.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 15146612055
num_examples: 19935938
download_size: 5700205740
dataset_size: 15146612055
---
# Dataset Card for "ncbi_genbank_part_44"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_32 | 2023-10-06T19:48:32.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 30511651329
num_examples: 2195743
download_size: 13612451510
dataset_size: 30511651329
---
# Dataset Card for "ncbi_genbank_part_32"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Intuit-GenSRF/toxigen-train-annotated-es | 2023-10-06T19:38:23.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 2822955
num_examples: 8960
download_size: 1685693
dataset_size: 2822955
---
# Dataset Card for "toxigen-train-annotated-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
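Here `labels` is a sequence of strings, so a single text can carry several annotations. A small sketch of tallying label frequencies across the train split (a plain `Counter` over the flattened per-example label lists):

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("Intuit-GenSRF/toxigen-train-annotated-es", split="train")

# Flatten the per-example label lists before counting.
counts = Counter(label for labels in ds["labels"] for label in labels)
print(counts.most_common(10))
```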
tr416/dataset_20231006_193820 | 2023-10-06T19:38:21.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74219
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_193820"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_193945 | 2023-10-06T19:39:46.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73904
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_193945"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_194015 | 2023-10-06T19:40:17.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73778
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_194015"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_194418 | 2023-10-06T19:44:19.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73758
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_194418"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_194657 | 2023-10-06T19:46:58.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74091
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_194657"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Intuit-GenSRF/haternet-es | 2023-10-06T19:50:41.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: text_en
dtype: string
splits:
- name: train
num_bytes: 2211676
num_examples: 6000
download_size: 1501046
dataset_size: 2211676
---
# Dataset Card for "haternet-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/theidolmsteranime | 2023-10-06T22:54:36.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 1 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of The Idolm@ster [anime]
This is the image base of the bangumi THE IDOLM@STER [Anime]. We detected 45 characters and 7,376 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 570 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 710 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 456 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 75 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 99 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 54 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 86 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 19 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 28 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 39 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 357 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 27 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 28 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 76 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 296 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 327 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 280 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 58 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 403 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 16 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 13 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 350 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 788 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 43 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 26 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 26 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 363 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 8 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 30 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 7 | [Download](29/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 30 | 37 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 53 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 25 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 20 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 48 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 11 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 23 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 8 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 160 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 439 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 27 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 7 | [Download](41/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 42 | 5 | [Download](42/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 43 | 254 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 601 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
towhid/guanaco-llama2-1k | 2023-10-06T20:02:23.000Z | [
"region:us"
] | towhid | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966693
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_45 | 2023-10-06T20:08:01.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 14300153721
num_examples: 25055512
download_size: 5495669961
dataset_size: 14300153721
---
# Dataset Card for "ncbi_genbank_part_45"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Intuit-GenSRF/tweet-eval-hate-es | 2023-10-06T20:05:33.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 3680715
num_examples: 9000
download_size: 2493022
dataset_size: 3680715
---
# Dataset Card for "tweet_eval-hate-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_33 | 2023-10-06T20:17:12.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 26425956795
num_examples: 6163404
download_size: 11468522245
dataset_size: 26425956795
---
# Dataset Card for "ncbi_genbank_part_33"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_200650 | 2023-10-06T20:06:51.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73595
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_200650"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_200728 | 2023-10-06T20:07:29.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74080
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_200728"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/blackclover | 2023-10-07T18:02:59.000Z | [
"size_categories:10K<n<100K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 10K<n<100K
---
# Bangumi Image Base of Black Clover
This is the image base of the bangumi Black Clover. We detected 187 characters and 19,538 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:----------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|
| 0 | 367 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 332 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 168 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 155 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 401 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 198 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 27 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 109 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 55 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 55 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 168 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 2568 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 112 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 113 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 179 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 407 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 124 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 102 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 58 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 44 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 275 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 69 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 73 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 190 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 76 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 18 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 223 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 30 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 162 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 192 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 303 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 39 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 193 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 45 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 174 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 988 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 113 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 25 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 107 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 150 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 104 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 45 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 85 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 60 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 33 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 43 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 34 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 24 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 128 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 24 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 92 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 22 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 52 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 33 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 52 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 36 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 93 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 38 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 58 | [Download](58/dataset.zip) |  |  |  |  |  |  |  |  |
| 59 | 599 | [Download](59/dataset.zip) |  |  |  |  |  |  |  |  |
| 60 | 47 | [Download](60/dataset.zip) |  |  |  |  |  |  |  |  |
| 61 | 53 | [Download](61/dataset.zip) |  |  |  |  |  |  |  |  |
| 62 | 69 | [Download](62/dataset.zip) |  |  |  |  |  |  |  |  |
| 63 | 140 | [Download](63/dataset.zip) |  |  |  |  |  |  |  |  |
| 64 | 52 | [Download](64/dataset.zip) |  |  |  |  |  |  |  |  |
| 65 | 29 | [Download](65/dataset.zip) |  |  |  |  |  |  |  |  |
| 66 | 73 | [Download](66/dataset.zip) |  |  |  |  |  |  |  |  |
| 67 | 31 | [Download](67/dataset.zip) |  |  |  |  |  |  |  |  |
| 68 | 134 | [Download](68/dataset.zip) |  |  |  |  |  |  |  |  |
| 69 | 28 | [Download](69/dataset.zip) |  |  |  |  |  |  |  |  |
| 70 | 36 | [Download](70/dataset.zip) |  |  |  |  |  |  |  |  |
| 71 | 81 | [Download](71/dataset.zip) |  |  |  |  |  |  |  |  |
| 72 | 56 | [Download](72/dataset.zip) |  |  |  |  |  |  |  |  |
| 73 | 25 | [Download](73/dataset.zip) |  |  |  |  |  |  |  |  |
| 74 | 113 | [Download](74/dataset.zip) |  |  |  |  |  |  |  |  |
| 75 | 24 | [Download](75/dataset.zip) |  |  |  |  |  |  |  |  |
| 76 | 45 | [Download](76/dataset.zip) |  |  |  |  |  |  |  |  |
| 77 | 60 | [Download](77/dataset.zip) |  |  |  |  |  |  |  |  |
| 78 | 42 | [Download](78/dataset.zip) |  |  |  |  |  |  |  |  |
| 79 | 24 | [Download](79/dataset.zip) |  |  |  |  |  |  |  |  |
| 80 | 26 | [Download](80/dataset.zip) |  |  |  |  |  |  |  |  |
| 81 | 25 | [Download](81/dataset.zip) |  |  |  |  |  |  |  |  |
| 82 | 28 | [Download](82/dataset.zip) |  |  |  |  |  |  |  |  |
| 83 | 84 | [Download](83/dataset.zip) |  |  |  |  |  |  |  |  |
| 84 | 18 | [Download](84/dataset.zip) |  |  |  |  |  |  |  |  |
| 85 | 38 | [Download](85/dataset.zip) |  |  |  |  |  |  |  |  |
| 86 | 46 | [Download](86/dataset.zip) |  |  |  |  |  |  |  |  |
| 87 | 34 | [Download](87/dataset.zip) |  |  |  |  |  |  |  |  |
| 88 | 37 | [Download](88/dataset.zip) |  |  |  |  |  |  |  |  |
| 89 | 112 | [Download](89/dataset.zip) |  |  |  |  |  |  |  |  |
| 90 | 68 | [Download](90/dataset.zip) |  |  |  |  |  |  |  |  |
| 91 | 51 | [Download](91/dataset.zip) |  |  |  |  |  |  |  |  |
| 92 | 26 | [Download](92/dataset.zip) |  |  |  |  |  |  |  |  |
| 93 | 43 | [Download](93/dataset.zip) |  |  |  |  |  |  |  |  |
| 94 | 136 | [Download](94/dataset.zip) |  |  |  |  |  |  |  |  |
| 95 | 62 | [Download](95/dataset.zip) |  |  |  |  |  |  |  |  |
| 96 | 475 | [Download](96/dataset.zip) |  |  |  |  |  |  |  |  |
| 97 | 14 | [Download](97/dataset.zip) |  |  |  |  |  |  |  |  |
| 98 | 31 | [Download](98/dataset.zip) |  |  |  |  |  |  |  |  |
| 99 | 23 | [Download](99/dataset.zip) |  |  |  |  |  |  |  |  |
| 100 | 55 | [Download](100/dataset.zip) |  |  |  |  |  |  |  |  |
| 101 | 246 | [Download](101/dataset.zip) |  |  |  |  |  |  |  |  |
| 102 | 65 | [Download](102/dataset.zip) |  |  |  |  |  |  |  |  |
| 103 | 48 | [Download](103/dataset.zip) |  |  |  |  |  |  |  |  |
| 104 | 12 | [Download](104/dataset.zip) |  |  |  |  |  |  |  |  |
| 105 | 40 | [Download](105/dataset.zip) |  |  |  |  |  |  |  |  |
| 106 | 67 | [Download](106/dataset.zip) |  |  |  |  |  |  |  |  |
| 107 | 15 | [Download](107/dataset.zip) |  |  |  |  |  |  |  |  |
| 108 | 28 | [Download](108/dataset.zip) |  |  |  |  |  |  |  |  |
| 109 | 48 | [Download](109/dataset.zip) |  |  |  |  |  |  |  |  |
| 110 | 20 | [Download](110/dataset.zip) |  |  |  |  |  |  |  |  |
| 111 | 34 | [Download](111/dataset.zip) |  |  |  |  |  |  |  |  |
| 112 | 36 | [Download](112/dataset.zip) |  |  |  |  |  |  |  |  |
| 113 | 590 | [Download](113/dataset.zip) |  |  |  |  |  |  |  |  |
| 114 | 23 | [Download](114/dataset.zip) |  |  |  |  |  |  |  |  |
| 115 | 64 | [Download](115/dataset.zip) |  |  |  |  |  |  |  |  |
| 116 | 76 | [Download](116/dataset.zip) |  |  |  |  |  |  |  |  |
| 117 | 63 | [Download](117/dataset.zip) |  |  |  |  |  |  |  |  |
| 118 | 15 | [Download](118/dataset.zip) |  |  |  |  |  |  |  |  |
| 119 | 36 | [Download](119/dataset.zip) |  |  |  |  |  |  |  |  |
| 120 | 120 | [Download](120/dataset.zip) |  |  |  |  |  |  |  |  |
| 121 | 53 | [Download](121/dataset.zip) |  |  |  |  |  |  |  |  |
| 122 | 36 | [Download](122/dataset.zip) |  |  |  |  |  |  |  |  |
| 123 | 16 | [Download](123/dataset.zip) |  |  |  |  |  |  |  |  |
| 124 | 14 | [Download](124/dataset.zip) |  |  |  |  |  |  |  |  |
| 125 | 19 | [Download](125/dataset.zip) |  |  |  |  |  |  |  |  |
| 126 | 80 | [Download](126/dataset.zip) |  |  |  |  |  |  |  |  |
| 127 | 26 | [Download](127/dataset.zip) |  |  |  |  |  |  |  |  |
| 128 | 18 | [Download](128/dataset.zip) |  |  |  |  |  |  |  |  |
| 129 | 145 | [Download](129/dataset.zip) |  |  |  |  |  |  |  |  |
| 130 | 17 | [Download](130/dataset.zip) |  |  |  |  |  |  |  |  |
| 131 | 386 | [Download](131/dataset.zip) |  |  |  |  |  |  |  |  |
| 132 | 31 | [Download](132/dataset.zip) |  |  |  |  |  |  |  |  |
| 133 | 20 | [Download](133/dataset.zip) |  |  |  |  |  |  |  |  |
| 134 | 25 | [Download](134/dataset.zip) |  |  |  |  |  |  |  |  |
| 135 | 13 | [Download](135/dataset.zip) |  |  |  |  |  |  |  |  |
| 136 | 21 | [Download](136/dataset.zip) |  |  |  |  |  |  |  |  |
| 137 | 45 | [Download](137/dataset.zip) |  |  |  |  |  |  |  |  |
| 138 | 154 | [Download](138/dataset.zip) |  |  |  |  |  |  |  |  |
| 139 | 12 | [Download](139/dataset.zip) |  |  |  |  |  |  |  |  |
| 140 | 177 | [Download](140/dataset.zip) |  |  |  |  |  |  |  |  |
| 141 | 36 | [Download](141/dataset.zip) |  |  |  |  |  |  |  |  |
| 142 | 25 | [Download](142/dataset.zip) |  |  |  |  |  |  |  |  |
| 143 | 93 | [Download](143/dataset.zip) |  |  |  |  |  |  |  |  |
| 144 | 35 | [Download](144/dataset.zip) |  |  |  |  |  |  |  |  |
| 145 | 24 | [Download](145/dataset.zip) |  |  |  |  |  |  |  |  |
| 146 | 187 | [Download](146/dataset.zip) |  |  |  |  |  |  |  |  |
| 147 | 32 | [Download](147/dataset.zip) |  |  |  |  |  |  |  |  |
| 148 | 30 | [Download](148/dataset.zip) |  |  |  |  |  |  |  |  |
| 149 | 11 | [Download](149/dataset.zip) |  |  |  |  |  |  |  |  |
| 150 | 13 | [Download](150/dataset.zip) |  |  |  |  |  |  |  |  |
| 151 | 13 | [Download](151/dataset.zip) |  |  |  |  |  |  |  |  |
| 152 | 14 | [Download](152/dataset.zip) |  |  |  |  |  |  |  |  |
| 153 | 24 | [Download](153/dataset.zip) |  |  |  |  |  |  |  |  |
| 154 | 20 | [Download](154/dataset.zip) |  |  |  |  |  |  |  |  |
| 155 | 1323 | [Download](155/dataset.zip) |  |  |  |  |  |  |  |  |
| 156 | 18 | [Download](156/dataset.zip) |  |  |  |  |  |  |  |  |
| 157 | 46 | [Download](157/dataset.zip) |  |  |  |  |  |  |  |  |
| 158 | 30 | [Download](158/dataset.zip) |  |  |  |  |  |  |  |  |
| 159 | 26 | [Download](159/dataset.zip) |  |  |  |  |  |  |  |  |
| 160 | 79 | [Download](160/dataset.zip) |  |  |  |  |  |  |  |  |
| 161 | 17 | [Download](161/dataset.zip) |  |  |  |  |  |  |  |  |
| 162 | 21 | [Download](162/dataset.zip) |  |  |  |  |  |  |  |  |
| 163 | 56 | [Download](163/dataset.zip) |  |  |  |  |  |  |  |  |
| 164 | 22 | [Download](164/dataset.zip) |  |  |  |  |  |  |  |  |
| 165 | 38 | [Download](165/dataset.zip) |  |  |  |  |  |  |  |  |
| 166 | 38 | [Download](166/dataset.zip) |  |  |  |  |  |  |  |  |
| 167 | 14 | [Download](167/dataset.zip) |  |  |  |  |  |  |  |  |
| 168 | 26 | [Download](168/dataset.zip) |  |  |  |  |  |  |  |  |
| 169 | 9 | [Download](169/dataset.zip) |  |  |  |  |  |  |  |  |
| 170 | 11 | [Download](170/dataset.zip) |  |  |  |  |  |  |  |  |
| 171 | 19 | [Download](171/dataset.zip) |  |  |  |  |  |  |  |  |
| 172 | 15 | [Download](172/dataset.zip) |  |  |  |  |  |  |  |  |
| 173 | 14 | [Download](173/dataset.zip) |  |  |  |  |  |  |  |  |
| 174 | 61 | [Download](174/dataset.zip) |  |  |  |  |  |  |  |  |
| 175 | 46 | [Download](175/dataset.zip) |  |  |  |  |  |  |  |  |
| 176 | 12 | [Download](176/dataset.zip) |  |  |  |  |  |  |  |  |
| 177 | 10 | [Download](177/dataset.zip) |  |  |  |  |  |  |  |  |
| 178 | 16 | [Download](178/dataset.zip) |  |  |  |  |  |  |  |  |
| 179 | 44 | [Download](179/dataset.zip) |  |  |  |  |  |  |  |  |
| 180 | 14 | [Download](180/dataset.zip) |  |  |  |  |  |  |  |  |
| 181 | 52 | [Download](181/dataset.zip) |  |  |  |  |  |  |  |  |
| 182 | 8 | [Download](182/dataset.zip) |  |  |  |  |  |  |  |  |
| 183 | 20 | [Download](183/dataset.zip) |  |  |  |  |  |  |  |  |
| 184 | 6 | [Download](184/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 185 | 6 | [Download](185/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| noise | 474 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
tr416/dataset_20231006_201232 | 2023-10-06T20:12:33.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73794
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_201232"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_201304 | 2023-10-06T20:13:06.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73952
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_201304"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_10 | 2023-10-06T20:36:54.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 18860452767
num_examples: 1911681
download_size: 8308479889
dataset_size: 18860452767
---
# Dataset Card for "ncbi_genbank_part_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_202109 | 2023-10-06T20:21:10.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73882
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_202109"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_202208 | 2023-10-06T20:22:09.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74223
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_202208"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_202418 | 2023-10-06T20:24:19.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73698
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_202418"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sunlab/PySecDB | 2023-10-06T21:12:06.000Z | [
"task_categories:feature-extraction",
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"code",
"commit",
"patch",
"region:us"
] | sunlab | null | null | null | 0 | 0 | ---
license: apache-2.0
task_categories:
- feature-extraction
- text-classification
- summarization
- text-generation
tags:
- code
- commit
- patch
language:
- en
pretty_name: PySecDB
size_categories:
- 1K<n<10K
---
# PySecDB: security commit dataset in Python
## Description
To foster large-scale research on vulnerability mitigation and to enable a comparison of different detection approaches, we make ***PySecDB***, the dataset from our ICSME23 paper, publicly available.
PySecDB is a real-world Python security commit dataset that contains around 1.2K security commits and 2.8K non-security commits. You can find more details in the paper *"[Exploring Security Commits in Python](https://csis.gmu.edu/ksun/)"*.
## Data Structure
PySecDB is stored in `json` format, where each sample contains five keys and has the following structure:
```json
{
    "id": the ID for each entry,
    "label": the type of patch, value: "security" or "non-security",
    "content": the content of the patch, type: str,
    "source": the source of the patch, value: "MITRE" or "wild",
    "CVE-ID": the CVE ID if it exists, value: "CVE-XXXX-XXXXX" or "NA"
}
```
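As a usage illustration, the sketch below assumes the dataset has been obtained through the request form and saved locally as `pysecdb.json` (a hypothetical filename) with the samples in a top-level list:

```python
import json
from collections import defaultdict

# "pysecdb.json" is a hypothetical local path; the real file is obtained
# through the PySecDB request form described below.
with open("pysecdb.json") as f:
    samples = json.load(f)

# Group security commits by their CVE ID ("NA" collects uncatalogued ones).
by_cve = defaultdict(list)
for sample in samples:
    if sample["label"] == "security":
        by_cve[sample["CVE-ID"]].append(sample["id"])

print(sum(len(v) for v in by_cve.values()), "security commits,",
      len(by_cve), "CVE buckets")
```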
## Disclaimer & Download Agreement<span id="jump"></span>
To download the PySecDB dataset, you must agree to the following Disclaimer & Download Agreement items. Please read these terms carefully before submitting the PySecDB request form.
- PySecDB is constructed and cross-checked by three experts who work in security patch research.
Due to potential misclassification caused by subjective factors, the Sun Security Laboratory (SunLab) cannot guarantee 100% accuracy for samples in the dataset.
- The copyright of the PySecDB dataset is owned by SunLab.
- PySecDB is intended for non-commercial research and/or personal use; the dataset should not be used for commercial or any other profitable purpose.
- The PySecDB dataset should not be resold or redistributed. Anyone who has obtained PySecDB should not share the dataset with others without permission from SunLab.
## Team
The PySecDB dataset is built by [Sun Security Laboratory](https://sunlab-gmu.github.io/) (SunLab) at [George Mason University](https://www2.gmu.edu/), Fairfax, VA.
## Citations
**If you are using PySecDB for work that will result in a publication (thesis, dissertation, paper, article), please use the following citation:**
```bibtex
@article{sun2023exploring,
title={Exploring Security Commits in Python},
author={Sun, Shiyu and Wang, Shu and Wang, Xinda and Xing, Yunlong and Zhang, Elisa and Sun, Kun},
journal={arXiv preprint arXiv:2307.11853},
year={2023}
}
```
|
Hack90/ncbi_genbank_part_34 | 2023-10-06T20:44:06.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 33614209063
num_examples: 62458
download_size: 15102741772
dataset_size: 33614209063
---
# Dataset Card for "ncbi_genbank_part_34"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_203135 | 2023-10-06T20:31:36.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73705
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_203135"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Leeps/IIT_Affordances_2017 | 2023-10-06T20:36:04.000Z | [
"license:afl-3.0",
"region:us"
] | Leeps | null | null | null | 0 | 0 | ---
license: afl-3.0
---
|
tr416/dataset_20231006_203612 | 2023-10-06T20:36:14.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73868
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_203612"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
towhid/aesir-test2 | 2023-10-07T16:46:04.000Z | [
"region:us"
] | towhid | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 68
num_examples: 17
download_size: 707
dataset_size: 68
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "aesir-test2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_35 | 2023-10-06T21:10:19.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 31326431657
num_examples: 1222
download_size: 14219174128
dataset_size: 31326431657
---
# Dataset Card for "ncbi_genbank_part_35"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_36 | 2023-10-06T21:37:05.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 32019675075
num_examples: 1276
download_size: 14482789388
dataset_size: 32019675075
---
# Dataset Card for "ncbi_genbank_part_36"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_37 | 2023-10-06T22:04:57.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 33117444507
num_examples: 1628
download_size: 14926056837
dataset_size: 33117444507
---
# Dataset Card for "ncbi_genbank_part_37"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sjdsafari/Nasdaq_full_1_month_2007_to_2023 | 2023-10-06T22:09:55.000Z | [
"license:apache-2.0",
"region:us"
] | Sjdsafari | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
evanfrick/rlaif | 2023-10-06T22:31:26.000Z | [
"region:us"
] | evanfrick | null | null | null | 0 | 0 | Entry not found |
Sjdsafari/Nasdaq_full_1_week_2007_to_2023 | 2023-10-06T22:02:48.000Z | [
"license:apache-2.0",
"region:us"
] | Sjdsafari | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
Hack90/ncbi_genbank_part_25 | 2023-10-06T22:26:10.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 33302381831
num_examples: 1638
download_size: 15008913174
dataset_size: 33302381831
---
# Dataset Card for "ncbi_genbank_part_25"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sjdsafari/Nasdaq_full_1_day_2007_to_2023 | 2023-10-06T22:07:15.000Z | [
"license:apache-2.0",
"region:us"
] | Sjdsafari | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
Sjdsafari/Nasdaq_240min_2007_to_2023 | 2023-10-06T22:12:57.000Z | [
"license:apache-2.0",
"region:us"
] | Sjdsafari | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
Sjdsafari/Nasdaq_60min_2007_to_2023 | 2023-10-06T22:51:20.000Z | [
"license:apache-2.0",
"region:us"
] | Sjdsafari | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
Intuit-GenSRF/hate-speech-offensive-es | 2023-10-06T22:24:18.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 7320813
num_examples: 24783
download_size: 4759804
dataset_size: 7320813
---
# Dataset Card for "hate_speech_offensive-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Intuit-GenSRF/hackathon-somos-nlp-2023-suicide-comments-es-en | 2023-10-06T22:27:58.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: num_tokens
dtype: int64
- name: text_en
dtype: string
splits:
- name: train
num_bytes: 2629537
num_examples: 8824
download_size: 1693102
dataset_size: 2629537
---
# Dataset Card for "hackathon-somos-nlp-2023-suicide-comments-es-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_61 | 2023-10-06T22:45:22.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 32323629638
num_examples: 119131
download_size: 14698797773
dataset_size: 32323629638
---
# Dataset Card for "ncbi_genbank_part_61"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_71 | 2023-10-07T00:00:16.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | Entry not found |
Intuit-GenSRF/tweets-hate-speech-detection-es | 2023-10-06T22:30:30.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 8933354
num_examples: 31962
download_size: 6104746
dataset_size: 8933354
---
# Dataset Card for "tweets_hate_speech_detection-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
1899Deposit38-ECV/Joaosaldanha | 2023-10-06T22:43:27.000Z | [
"region:us"
] | 1899Deposit38-ECV | null | null | null | 0 | 0 | Entry not found |
Hack90/ncbi_genbank_part_62 | 2023-10-06T23:11:13.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 23316507081
num_examples: 4572223
download_size: 10026708270
dataset_size: 23316507081
---
# Dataset Card for "ncbi_genbank_part_62"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ZelaAI/test_repo | 2023-10-07T21:31:09.000Z | [
"region:us"
] | ZelaAI | null | null | null | 0 | 0 | Entry not found |
tr416/dataset_20231006_231107 | 2023-10-06T23:11:08.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73705
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_231107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_231419 | 2023-10-06T23:14:20.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73847
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_231419"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_231926 | 2023-10-06T23:19:27.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73865
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_231926"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_232347 | 2023-10-06T23:23:48.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74080
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_232347"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_233006 | 2023-10-06T23:30:07.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73943
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_233006"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_63 | 2023-10-06T23:35:56.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 12920615864
num_examples: 13999730
download_size: 5041172591
dataset_size: 12920615864
---
# Dataset Card for "ncbi_genbank_part_63"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_233701 | 2023-10-06T23:37:02.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74101
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_233701"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_233908 | 2023-10-06T23:39:09.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73813
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_233908"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_234030 | 2023-10-06T23:40:31.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73965
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_234030"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_64 | 2023-10-07T00:01:49.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 23605755944
num_examples: 1596418
download_size: 10216572338
dataset_size: 23605755944
---
# Dataset Card for "ncbi_genbank_part_64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BEE-spoke-data/coedit-reworded-deduped | 2023-10-10T20:07:34.000Z | [
"license:apache-2.0",
"arxiv:2305.09857",
"region:us"
] | BEE-spoke-data | null | null | null | 0 | 0 | ---
license: apache-2.0
dataset_info:
- config_name: dedup-by-target
features:
- name: task
dtype: string
- name: id
dtype: string
- name: original_instruction
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 23629242
num_examples: 79943
download_size: 11836738
dataset_size: 23629242
- config_name: default
features:
- name: task
dtype: string
- name: id
dtype: string
- name: original_instruction
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 23629242
num_examples: 79943
download_size: 11836739
dataset_size: 23629242
configs:
- config_name: dedup-by-target
data_files:
- split: train
path: dedup-by-target/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
source_datasets: chargoddard/coedit-reworded
---
# BEE-spoke-data/coedit-reworded-deduped
MinHash deduplication on the `target` column. Source data comes from [coedit-reworded](https://hf.co/chargoddard/coedit-reworded).
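The exact deduplication pipeline is not published in this card; below is a minimal sketch of MinHash-based near-duplicate filtering over a `target`-style column, assuming the `datasketch` library and a 0.8 Jaccard threshold (both are assumptions, not the card's stated setup):
```python
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in text.lower().split():
        m.update(token.encode("utf8"))
    return m

lsh = MinHashLSH(threshold=0.8, num_perm=128)  # threshold is an assumption
kept = []
for i, target in enumerate(["Fix the grammar here.", "Fix the grammar here please."]):
    sig = minhash(target)
    if not lsh.query(sig):       # no near-duplicate indexed yet -> keep this row
        lsh.insert(str(i), sig)
        kept.append(target)
```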
## Citation
Original dataset courtesy of Grammarly:
```
@article{raheja2023coedit,
title={CoEdIT: Text Editing by Task-Specific Instruction Tuning},
author={Vipul Raheja and Dhruv Kumar and Ryan Koo and Dongyeop Kang},
year={2023},
eprint={2305.09857},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
autoevaluate/autoeval-eval-amazon_polarity-amazon_polarity-afc8c5-93509145863 | 2023-10-07T00:19:46.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- amazon_polarity
eval_info:
task: binary_classification
model: AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon
metrics: []
dataset_name: amazon_polarity
dataset_config: amazon_polarity
dataset_split: test
col_mapping:
text: content
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon
* Dataset: amazon_polarity
* Config: amazon_polarity
* Split: test
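A rough sketch of reproducing this evaluation locally (this is not the AutoTrain pipeline itself, just the same model, dataset, config, and column mapping):
```python
from datasets import load_dataset
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon",
)

# Per the col_mapping above: `content` is the input text, `label` the target.
sample = load_dataset("amazon_polarity", "amazon_polarity", split="test[:8]")
preds = clf(sample["content"], truncation=True)
for gold, pred in zip(sample["label"], preds):
    print(gold, pred["label"], round(pred["score"], 3))
```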
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@AdamCodd](https://huggingface.co/AdamCodd) for evaluating this model. |
Hack90/ncbi_genbank_part_26 | 2023-10-07T00:35:15.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 34342592523
num_examples: 98914
download_size: 15597412855
dataset_size: 34342592523
---
# Dataset Card for "ncbi_genbank_part_26"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_65 | 2023-10-07T00:26:31.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 19753145244
num_examples: 1476991
download_size: 8576777094
dataset_size: 19753145244
---
# Dataset Card for "ncbi_genbank_part_65"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
teleprint-me/function-calling | 2023-10-07T02:06:25.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | teleprint-me | null | null | null | 0 | 0 | ---
license: cc-by-nc-sa-4.0
---
# Function Calling Dataset
## Description
The Function Calling Dataset is designed to facilitate the training of natural language models to understand and generate structured function calls within a conversational context. This dataset contains a collection of interactions where users engage in conversations with a virtual assistant and request specific actions or information using structured function calls.
## Dataset Contents
- Regular Conversations: The dataset includes standard conversational cues where users ask questions, make requests, or seek information.
- Structured Function Calls: Using the `[FUNC]` tag, the dataset provides examples of how to generate structured outputs, representing function calls. These function calls encompass various tasks and actions.
- Marrying Structure and Natural Language: Following each function call, there are natural language explanations or elaborations. This ensures that the dataset strikes a balance between structured and human-friendly responses.
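The card does not show the exact serialization; the following is a purely hypothetical illustration of the pattern described above (the function name and argument syntax are invented for the example):
```
User: What's the weather like in Boston tomorrow?
Assistant: [FUNC] get_weather(location="Boston", date="tomorrow")
I've requested tomorrow's forecast for Boston and will summarize the result in plain language.
```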
## Usage
This dataset is intended for training and fine-tuning natural language models, enabling them to understand and generate structured function calls in response to user queries. It can be valuable for projects involving virtual assistants, chatbots, or any application where structured actions are required within a conversational context.
## License
This dataset is made available under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/).
---
|
naiviv1000/poncho | 2023-10-07T00:32:23.000Z | [
"license:apache-2.0",
"region:us"
] | naiviv1000 | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
Hack90/ncbi_genbank_part_66 | 2023-10-07T00:47:28.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | Entry not found |
Hack90/ncbi_genbank_part_16 | 2023-10-07T00:50:32.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 9781284891
num_examples: 14048187
download_size: 4047367895
dataset_size: 9781284891
---
# Dataset Card for "ncbi_genbank_part_16"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
artdwn/stable-diffusion-backup | 2023-10-07T01:49:58.000Z | [
"arxiv:2211.06679",
"region:us"
] | artdwn | null | null | null | 0 | 0 | # Stable Diffusion web UI
A browser interface based on Gradio library for Stable Diffusion.

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a `((tuxedo))` - will pay more attention to tuxedo
- a man in a `(tuxedo:1.21)` - alternative syntax
- select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on macOS) to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3-dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Can use a separate neural network to produce previews with almost no VRAM or compute requirements
- Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
- Styles, a way to save part of prompt and easily apply them via dropdown later
- Variations, a way to generate the same image but with tiny differences
- Seed resizing, a way to generate the same image but at a slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API (a minimal usage sketch follows this list)
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image's dimension must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
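As a hedged sketch of the built-in API noted in the feature list (start the webui with `--api`; `/sdapi/v1/txt2img` is the standard text-to-image route, and the payload below is a small subset of the accepted fields):
```python
import base64
import requests

# Assumes a local webui started with the --api flag on the default port.
payload = {"prompt": "a lighthouse at dusk", "steps": 20, "width": 512, "height": 512}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=300)
resp.raise_for_status()

# Generated images come back as base64-encoded strings.
with open("out.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```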
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for:
- [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended)
- [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
- [Intel CPUs, Intel GPUs (both integrated and discrete)](https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon) (external wiki page)
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)
### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.
### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory in which you would like the webui to be installed and execute the following command:
```bash
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
For the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) [crawlable wiki](https://github-wiki-see.page/m/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- LyCORIS - KohakuBlueleaf
- Restart sampling - lambertae - https://github.com/Newbeeer/diffusion_restart_sampling
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
|
REDTMX/J | 2023-10-07T00:53:39.000Z | [
"license:apache-2.0",
"region:us"
] | REDTMX | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
Hack90/ncbi_genbank_part_27 | 2023-10-07T01:12:17.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 32872955172
num_examples: 183931
download_size: 14877735882
dataset_size: 32872955172
---
# Dataset Card for "ncbi_genbank_part_27"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Intuit-GenSRF/tweet-eval-offensive-es | 2023-10-07T01:13:41.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: num_tokens
dtype: int64
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 4941865
num_examples: 11519
download_size: 3088828
dataset_size: 4941865
---
# Dataset Card for "tweet_eval-offensive-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
johannes-garstenauer/embeddings_from_distilbert_class_heaps_and_eval_part0 | 2023-10-07T01:17:36.000Z | [
"region:us"
] | johannes-garstenauer | null | null | null | 1 | 0 | ---
dataset_info:
features:
- name: struct
dtype: string
- name: label
dtype: int64
- name: pred
dtype: int64
- name: cls_layer_6
sequence: float32
- name: cls_layer_5
sequence: float32
- name: cls_layer_4
sequence: float32
splits:
- name: train
num_bytes: 1282993344
num_examples: 134592
download_size: 1493427157
dataset_size: 1282993344
---
# Dataset Card for "embeddings_from_distilbert_class_heaps_and_eval_part0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_17 | 2023-10-07T01:37:04.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 9388521444
num_examples: 13063434
download_size: 3796686652
dataset_size: 9388521444
---
# Dataset Card for "ncbi_genbank_part_17"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |