| id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
jjisd/pandanew | 2023-10-07T01:42:59.000Z | [
"region:us"
] | jjisd | null | null | null | 0 | 0 | Entry not found |
CHDCruze/ep1techwalkers | 2023-10-07T01:41:46.000Z | [
"region:us"
] | CHDCruze | null | null | null | 0 | 0 | - Arc Browser Logo
- Graphic representation of AI with GPT-3.5 and Anthropic’s Logos
- Visual of a Browser Tab
- Download Icon with a document being renamed
- Hover Action Icon
- ChatGPT Dialogue Box
- Prototyping or Draft Sketch
- Boosts Icon (upward arrow or power symbol)
- Picture of CEO Josh Miller (from the livestream if possible)
- Feedback/Comments Icon |
Fraol/DedupedRefDatasetWMetricFinal | 2023-10-07T20:04:05.000Z | [
"region:us"
] | Fraol | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: source
dtype: string
- name: path_name
dtype: string
- name: file_name
dtype: string
- name: ref_type
dtype: string
- name: ref_status
dtype: string
- name: hash
dtype: string
- name: class_name
dtype: string
- name: method_name
dtype: string
- name: row_number
dtype: int64
- name: cbo
dtype: float64
- name: wmc
dtype: float64
- name: lcom*
dtype: float64
- name: loc
dtype: float64
splits:
- name: train
num_bytes: 901652208.1944371
num_examples: 150671
download_size: 215554822
dataset_size: 901652208.1944371
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "DedupedRefDatasetWMetricFinal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
artdwn/sd | 2023-10-10T10:47:06.000Z | [
"region:us"
] | artdwn | null | null | null | 0 | 0 | Entry not found |
tr416/dataset_20231007_023707 | 2023-10-07T02:37:07.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | Entry not found |
tr416/dataset_20231007_024059 | 2023-10-07T02:41:00.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73943
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_024059"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_024249 | 2023-10-07T02:42:50.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73878
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_024249"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_024410 | 2023-10-07T02:44:11.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73926
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_024410"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_024626 | 2023-10-07T02:46:28.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73818
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_024626"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_024652 | 2023-10-07T02:46:53.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74082
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_024652"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_024754 | 2023-10-07T02:47:55.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73962
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_024754"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_025212 | 2023-10-07T02:52:14.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73961
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_025212"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_025331 | 2023-10-07T02:53:32.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73708
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_025331"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_025421 | 2023-10-07T02:54:22.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73904
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_025421"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_025600 | 2023-10-07T02:56:02.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74033
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_025600"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_025659 | 2023-10-07T02:57:00.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73961
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_025659"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_025908 | 2023-10-07T02:59:09.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74279
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_025908"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_030433 | 2023-10-07T03:04:35.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73993
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_030433"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_030710 | 2023-10-07T03:07:12.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73696
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_030710"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_030826 | 2023-10-07T03:08:27.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73672
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_030826"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_031407 | 2023-10-07T03:14:08.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74702
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_031407"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_031447 | 2023-10-07T03:14:48.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73952
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_031447"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
emozilla/proofpile-test-tokenized-mistral | 2023-10-07T03:18:31.000Z | [
"region:us"
] | emozilla | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: meta
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: tokenized_len
dtype: int64
splits:
- name: train
num_bytes: 1647980074
num_examples: 46251
download_size: 554081392
dataset_size: 1647980074
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "proofpile-test-tokenized-mistral"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_033121 | 2023-10-07T03:31:22.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74119
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_033121"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_033301 | 2023-10-07T03:33:02.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74254
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_033301"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_033400 | 2023-10-07T03:34:01.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74449
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_033400"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_033716 | 2023-10-07T03:37:17.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73888
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_033716"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_033835 | 2023-10-07T03:38:36.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73844
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_033835"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231007_034029 | 2023-10-07T03:40:30.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73744
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_034029"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/hunterxhunter | 2023-10-07T10:42:11.000Z | [
"size_categories:10K<n<100K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 10K<n<100K
---
# Bangumi Image Base of Hunter X Hunter
This is the image base of the bangumi Hunter x Hunter. We detected 130 characters and 12,906 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise.** If you intend to train models on this dataset manually, we recommend preprocessing the downloaded data to remove potentially noisy samples (approximately a 1% noise rate).
Here is a preview of the characters:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:----------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|
| 0 | 3471 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 541 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 306 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 363 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 123 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 154 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 103 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 123 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 52 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 66 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 82 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 29 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 50 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 246 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 31 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 85 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 21 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 45 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 40 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 42 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 61 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 99 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 20 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 118 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 48 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 35 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 142 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 1450 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 43 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 98 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 39 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 42 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 67 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 17 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 27 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 34 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 14 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 15 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 41 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 22 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 24 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 49 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 38 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 19 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 24 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 236 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 57 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 64 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 34 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 62 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 24 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 21 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 12 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 107 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 18 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 745 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 133 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 277 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 33 | [Download](58/dataset.zip) |  |  |  |  |  |  |  |  |
| 59 | 110 | [Download](59/dataset.zip) |  |  |  |  |  |  |  |  |
| 60 | 65 | [Download](60/dataset.zip) |  |  |  |  |  |  |  |  |
| 61 | 24 | [Download](61/dataset.zip) |  |  |  |  |  |  |  |  |
| 62 | 22 | [Download](62/dataset.zip) |  |  |  |  |  |  |  |  |
| 63 | 35 | [Download](63/dataset.zip) |  |  |  |  |  |  |  |  |
| 64 | 65 | [Download](64/dataset.zip) |  |  |  |  |  |  |  |  |
| 65 | 106 | [Download](65/dataset.zip) |  |  |  |  |  |  |  |  |
| 66 | 49 | [Download](66/dataset.zip) |  |  |  |  |  |  |  |  |
| 67 | 21 | [Download](67/dataset.zip) |  |  |  |  |  |  |  |  |
| 68 | 45 | [Download](68/dataset.zip) |  |  |  |  |  |  |  |  |
| 69 | 67 | [Download](69/dataset.zip) |  |  |  |  |  |  |  |  |
| 70 | 50 | [Download](70/dataset.zip) |  |  |  |  |  |  |  |  |
| 71 | 15 | [Download](71/dataset.zip) |  |  |  |  |  |  |  |  |
| 72 | 52 | [Download](72/dataset.zip) |  |  |  |  |  |  |  |  |
| 73 | 32 | [Download](73/dataset.zip) |  |  |  |  |  |  |  |  |
| 74 | 16 | [Download](74/dataset.zip) |  |  |  |  |  |  |  |  |
| 75 | 11 | [Download](75/dataset.zip) |  |  |  |  |  |  |  |  |
| 76 | 21 | [Download](76/dataset.zip) |  |  |  |  |  |  |  |  |
| 77 | 31 | [Download](77/dataset.zip) |  |  |  |  |  |  |  |  |
| 78 | 38 | [Download](78/dataset.zip) |  |  |  |  |  |  |  |  |
| 79 | 15 | [Download](79/dataset.zip) |  |  |  |  |  |  |  |  |
| 80 | 49 | [Download](80/dataset.zip) |  |  |  |  |  |  |  |  |
| 81 | 13 | [Download](81/dataset.zip) |  |  |  |  |  |  |  |  |
| 82 | 15 | [Download](82/dataset.zip) |  |  |  |  |  |  |  |  |
| 83 | 18 | [Download](83/dataset.zip) |  |  |  |  |  |  |  |  |
| 84 | 16 | [Download](84/dataset.zip) |  |  |  |  |  |  |  |  |
| 85 | 122 | [Download](85/dataset.zip) |  |  |  |  |  |  |  |  |
| 86 | 22 | [Download](86/dataset.zip) |  |  |  |  |  |  |  |  |
| 87 | 16 | [Download](87/dataset.zip) |  |  |  |  |  |  |  |  |
| 88 | 57 | [Download](88/dataset.zip) |  |  |  |  |  |  |  |  |
| 89 | 45 | [Download](89/dataset.zip) |  |  |  |  |  |  |  |  |
| 90 | 20 | [Download](90/dataset.zip) |  |  |  |  |  |  |  |  |
| 91 | 10 | [Download](91/dataset.zip) |  |  |  |  |  |  |  |  |
| 92 | 30 | [Download](92/dataset.zip) |  |  |  |  |  |  |  |  |
| 93 | 14 | [Download](93/dataset.zip) |  |  |  |  |  |  |  |  |
| 94 | 134 | [Download](94/dataset.zip) |  |  |  |  |  |  |  |  |
| 95 | 21 | [Download](95/dataset.zip) |  |  |  |  |  |  |  |  |
| 96 | 26 | [Download](96/dataset.zip) |  |  |  |  |  |  |  |  |
| 97 | 69 | [Download](97/dataset.zip) |  |  |  |  |  |  |  |  |
| 98 | 8 | [Download](98/dataset.zip) |  |  |  |  |  |  |  |  |
| 99 | 17 | [Download](99/dataset.zip) |  |  |  |  |  |  |  |  |
| 100 | 18 | [Download](100/dataset.zip) |  |  |  |  |  |  |  |  |
| 101 | 5 | [Download](101/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 102 | 27 | [Download](102/dataset.zip) |  |  |  |  |  |  |  |  |
| 103 | 25 | [Download](103/dataset.zip) |  |  |  |  |  |  |  |  |
| 104 | 60 | [Download](104/dataset.zip) |  |  |  |  |  |  |  |  |
| 105 | 25 | [Download](105/dataset.zip) |  |  |  |  |  |  |  |  |
| 106 | 16 | [Download](106/dataset.zip) |  |  |  |  |  |  |  |  |
| 107 | 11 | [Download](107/dataset.zip) |  |  |  |  |  |  |  |  |
| 108 | 25 | [Download](108/dataset.zip) |  |  |  |  |  |  |  |  |
| 109 | 14 | [Download](109/dataset.zip) |  |  |  |  |  |  |  |  |
| 110 | 53 | [Download](110/dataset.zip) |  |  |  |  |  |  |  |  |
| 111 | 23 | [Download](111/dataset.zip) |  |  |  |  |  |  |  |  |
| 112 | 21 | [Download](112/dataset.zip) |  |  |  |  |  |  |  |  |
| 113 | 12 | [Download](113/dataset.zip) |  |  |  |  |  |  |  |  |
| 114 | 9 | [Download](114/dataset.zip) |  |  |  |  |  |  |  |  |
| 115 | 13 | [Download](115/dataset.zip) |  |  |  |  |  |  |  |  |
| 116 | 47 | [Download](116/dataset.zip) |  |  |  |  |  |  |  |  |
| 117 | 11 | [Download](117/dataset.zip) |  |  |  |  |  |  |  |  |
| 118 | 6 | [Download](118/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 119 | 6 | [Download](119/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 120 | 8 | [Download](120/dataset.zip) |  |  |  |  |  |  |  |  |
| 121 | 54 | [Download](121/dataset.zip) |  |  |  |  |  |  |  |  |
| 122 | 25 | [Download](122/dataset.zip) |  |  |  |  |  |  |  |  |
| 123 | 53 | [Download](123/dataset.zip) |  |  |  |  |  |  |  |  |
| 124 | 8 | [Download](124/dataset.zip) |  |  |  |  |  |  |  |  |
| 125 | 16 | [Download](125/dataset.zip) |  |  |  |  |  |  |  |  |
| 126 | 22 | [Download](126/dataset.zip) |  |  |  |  |  |  |  |  |
| 127 | 6 | [Download](127/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 128 | 57 | [Download](128/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 236 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/codegeass | 2023-10-07T09:31:25.000Z | [
"size_categories:10K<n<100K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 10K<n<100K
---
# Bangumi Image Base of Code Geass
This is the image base of the bangumi Code Geass. We detected 136 characters and 10,361 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise.** If you intend to train models on this dataset manually, we recommend preprocessing the downloaded data to remove potentially noisy samples (approximately a 1% noise rate).
Here is a preview of the characters:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:----------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|
| 0 | 37 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 97 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 119 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 187 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 218 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 131 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 77 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 128 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 79 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 42 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 31 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 39 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 13 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 42 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 52 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 89 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 79 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 46 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 75 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 82 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 28 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 21 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 51 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 23 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 26 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 44 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 1363 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 21 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 31 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 109 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 20 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 16 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 178 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 26 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 778 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 16 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 44 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 61 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 71 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 14 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 116 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 20 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 20 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 113 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 298 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 19 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 43 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 141 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 13 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 23 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 48 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 20 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 36 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 19 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 14 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 16 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 17 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 90 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 33 | [Download](58/dataset.zip) |  |  |  |  |  |  |  |  |
| 59 | 17 | [Download](59/dataset.zip) |  |  |  |  |  |  |  |  |
| 60 | 27 | [Download](60/dataset.zip) |  |  |  |  |  |  |  |  |
| 61 | 197 | [Download](61/dataset.zip) |  |  |  |  |  |  |  |  |
| 62 | 19 | [Download](62/dataset.zip) |  |  |  |  |  |  |  |  |
| 63 | 43 | [Download](63/dataset.zip) |  |  |  |  |  |  |  |  |
| 64 | 591 | [Download](64/dataset.zip) |  |  |  |  |  |  |  |  |
| 65 | 44 | [Download](65/dataset.zip) |  |  |  |  |  |  |  |  |
| 66 | 73 | [Download](66/dataset.zip) |  |  |  |  |  |  |  |  |
| 67 | 60 | [Download](67/dataset.zip) |  |  |  |  |  |  |  |  |
| 68 | 151 | [Download](68/dataset.zip) |  |  |  |  |  |  |  |  |
| 69 | 22 | [Download](69/dataset.zip) |  |  |  |  |  |  |  |  |
| 70 | 20 | [Download](70/dataset.zip) |  |  |  |  |  |  |  |  |
| 71 | 74 | [Download](71/dataset.zip) |  |  |  |  |  |  |  |  |
| 72 | 20 | [Download](72/dataset.zip) |  |  |  |  |  |  |  |  |
| 73 | 54 | [Download](73/dataset.zip) |  |  |  |  |  |  |  |  |
| 74 | 26 | [Download](74/dataset.zip) |  |  |  |  |  |  |  |  |
| 75 | 28 | [Download](75/dataset.zip) |  |  |  |  |  |  |  |  |
| 76 | 30 | [Download](76/dataset.zip) |  |  |  |  |  |  |  |  |
| 77 | 14 | [Download](77/dataset.zip) |  |  |  |  |  |  |  |  |
| 78 | 13 | [Download](78/dataset.zip) |  |  |  |  |  |  |  |  |
| 79 | 55 | [Download](79/dataset.zip) |  |  |  |  |  |  |  |  |
| 80 | 12 | [Download](80/dataset.zip) |  |  |  |  |  |  |  |  |
| 81 | 165 | [Download](81/dataset.zip) |  |  |  |  |  |  |  |  |
| 82 | 11 | [Download](82/dataset.zip) |  |  |  |  |  |  |  |  |
| 83 | 185 | [Download](83/dataset.zip) |  |  |  |  |  |  |  |  |
| 84 | 72 | [Download](84/dataset.zip) |  |  |  |  |  |  |  |  |
| 85 | 9 | [Download](85/dataset.zip) |  |  |  |  |  |  |  |  |
| 86 | 32 | [Download](86/dataset.zip) |  |  |  |  |  |  |  |  |
| 87 | 39 | [Download](87/dataset.zip) |  |  |  |  |  |  |  |  |
| 88 | 120 | [Download](88/dataset.zip) |  |  |  |  |  |  |  |  |
| 89 | 126 | [Download](89/dataset.zip) |  |  |  |  |  |  |  |  |
| 90 | 18 | [Download](90/dataset.zip) |  |  |  |  |  |  |  |  |
| 91 | 44 | [Download](91/dataset.zip) |  |  |  |  |  |  |  |  |
| 92 | 10 | [Download](92/dataset.zip) |  |  |  |  |  |  |  |  |
| 93 | 6 | [Download](93/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 94 | 43 | [Download](94/dataset.zip) |  |  |  |  |  |  |  |  |
| 95 | 207 | [Download](95/dataset.zip) |  |  |  |  |  |  |  |  |
| 96 | 12 | [Download](96/dataset.zip) |  |  |  |  |  |  |  |  |
| 97 | 11 | [Download](97/dataset.zip) |  |  |  |  |  |  |  |  |
| 98 | 15 | [Download](98/dataset.zip) |  |  |  |  |  |  |  |  |
| 99 | 17 | [Download](99/dataset.zip) |  |  |  |  |  |  |  |  |
| 100 | 20 | [Download](100/dataset.zip) |  |  |  |  |  |  |  |  |
| 101 | 9 | [Download](101/dataset.zip) |  |  |  |  |  |  |  |  |
| 102 | 253 | [Download](102/dataset.zip) |  |  |  |  |  |  |  |  |
| 103 | 10 | [Download](103/dataset.zip) |  |  |  |  |  |  |  |  |
| 104 | 16 | [Download](104/dataset.zip) |  |  |  |  |  |  |  |  |
| 105 | 28 | [Download](105/dataset.zip) |  |  |  |  |  |  |  |  |
| 106 | 19 | [Download](106/dataset.zip) |  |  |  |  |  |  |  |  |
| 107 | 9 | [Download](107/dataset.zip) |  |  |  |  |  |  |  |  |
| 108 | 17 | [Download](108/dataset.zip) |  |  |  |  |  |  |  |  |
| 109 | 12 | [Download](109/dataset.zip) |  |  |  |  |  |  |  |  |
| 110 | 7 | [Download](110/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 111 | 11 | [Download](111/dataset.zip) |  |  |  |  |  |  |  |  |
| 112 | 20 | [Download](112/dataset.zip) |  |  |  |  |  |  |  |  |
| 113 | 17 | [Download](113/dataset.zip) |  |  |  |  |  |  |  |  |
| 114 | 10 | [Download](114/dataset.zip) |  |  |  |  |  |  |  |  |
| 115 | 9 | [Download](115/dataset.zip) |  |  |  |  |  |  |  |  |
| 116 | 22 | [Download](116/dataset.zip) |  |  |  |  |  |  |  |  |
| 117 | 308 | [Download](117/dataset.zip) |  |  |  |  |  |  |  |  |
| 118 | 423 | [Download](118/dataset.zip) |  |  |  |  |  |  |  |  |
| 119 | 19 | [Download](119/dataset.zip) |  |  |  |  |  |  |  |  |
| 120 | 7 | [Download](120/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 121 | 9 | [Download](121/dataset.zip) |  |  |  |  |  |  |  |  |
| 122 | 8 | [Download](122/dataset.zip) |  |  |  |  |  |  |  |  |
| 123 | 114 | [Download](123/dataset.zip) |  |  |  |  |  |  |  |  |
| 124 | 88 | [Download](124/dataset.zip) |  |  |  |  |  |  |  |  |
| 125 | 5 | [Download](125/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 126 | 10 | [Download](126/dataset.zip) |  |  |  |  |  |  |  |  |
| 127 | 14 | [Download](127/dataset.zip) |  |  |  |  |  |  |  |  |
| 128 | 8 | [Download](128/dataset.zip) |  |  |  |  |  |  |  |  |
| 129 | 5 | [Download](129/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 130 | 13 | [Download](130/dataset.zip) |  |  |  |  |  |  |  |  |
| 131 | 14 | [Download](131/dataset.zip) |  |  |  |  |  |  |  |  |
| 132 | 7 | [Download](132/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 133 | 30 | [Download](133/dataset.zip) |  |  |  |  |  |  |  |  |
| 134 | 7 | [Download](134/dataset.zip) |  |  |  |  |  |  |  | N/A |
| noise | 348 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
ziqin/autotrain-data-test | 2023-10-07T03:52:33.000Z | [
"task_categories:image-classification",
"region:us"
] | ziqin | null | null | null | 0 | 0 | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: test
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<380x254 RGB PIL image>",
"target": 0
},
{
"image": "<339x254 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Mountain', 'sea', 'snow'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows (a loading sketch is included after the table):
| Split name | Num samples |
| ------------ | ------------------- |
| train | 6 |
| valid | 3 |
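Given the schema above, here is a minimal loading sketch. It assumes the repo id `ziqin/autotrain-data-test` from this listing resolves through the standard `datasets` API (not verified here); the split names and features are taken from the card itself.
```python
from datasets import load_dataset

# Hypothetical loading sketch -- repo id taken from this listing; splits ("train"/"valid")
# and features (image + target ClassLabel) come from the card above.
ds = load_dataset("ziqin/autotrain-data-test")

sample = ds["train"][0]
print(sample["image"].size)               # a PIL image, e.g. (380, 254)

target = ds["train"].features["target"]   # ClassLabel(names=['Mountain', 'sea', 'snow'])
print(target.int2str(sample["target"]))   # human-readable class name for this sample
```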
|
alokps/hf-github-issues-comments-cat | 2023-10-07T03:47:00.000Z | [
"region:us"
] | alokps | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: html_url
dtype: string
- name: title
dtype: string
- name: comments
dtype: string
- name: body
dtype: string
- name: comment_length
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 23118062
num_examples: 3907
download_size: 5079082
dataset_size: 23118062
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hf-github-issues-comments-cat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
smokiazo/gsgshshsshgsgss | 2023-10-07T03:53:50.000Z | [
"region:us"
] | smokiazo | null | null | null | 0 | 0 | Entry not found |
alokps/hf-github-issues-comments-embeddings | 2023-10-07T03:52:15.000Z | [
"region:us"
] | alokps | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: html_url
dtype: string
- name: title
dtype: string
- name: comments
dtype: string
- name: body
dtype: string
- name: comment_length
dtype: int64
- name: text
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 35135994
num_examples: 3907
download_size: 18199957
dataset_size: 35135994
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hf-github-issues-comments-embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
toninhodjj/dudola | 2023-10-07T04:05:26.000Z | [
"region:us"
] | toninhodjj | null | null | null | 0 | 0 | Entry not found |
Phoenixrayne6/TaylorGrodin-Paintbrush-III-RVC | 2023-10-07T04:20:48.000Z | [
"license:gpl",
"region:us"
] | Phoenixrayne6 | null | null | null | 0 | 0 | ---
license: gpl
---
|
BangumiBase/jojonokimyounabouken | 2023-10-07T13:24:52.000Z | [
"size_categories:10K<n<100K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 10K<n<100K
---
# Bangumi Image Base of Jojo No Kimyou Na Bouken
This is the image base of the bangumi JoJo no Kimyou na Bouken. We detected 137 characters and 14,828 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise.** If you intend to train models on this dataset manually, we recommend preprocessing the downloaded data to remove potentially noisy samples (approximately a 1% noise rate).
Here is a preview of the characters:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:----------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|
| 0 | 68 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 351 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 101 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 188 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 92 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 57 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 161 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 647 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 144 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 821 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 68 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 220 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 942 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 37 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 336 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 274 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 562 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 225 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 151 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 48 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 613 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 132 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 207 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 151 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 58 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 679 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 527 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 86 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 27 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 45 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 47 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 87 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 138 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 48 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 45 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 30 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 108 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 175 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 33 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 133 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 81 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 121 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 26 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 65 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 36 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 15 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 41 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 71 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 20 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 29 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 163 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 172 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 43 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 393 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 63 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 36 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 52 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 80 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 38 | [Download](58/dataset.zip) |  |  |  |  |  |  |  |  |
| 59 | 48 | [Download](59/dataset.zip) |  |  |  |  |  |  |  |  |
| 60 | 51 | [Download](60/dataset.zip) |  |  |  |  |  |  |  |  |
| 61 | 33 | [Download](61/dataset.zip) |  |  |  |  |  |  |  |  |
| 62 | 38 | [Download](62/dataset.zip) |  |  |  |  |  |  |  |  |
| 63 | 21 | [Download](63/dataset.zip) |  |  |  |  |  |  |  |  |
| 64 | 39 | [Download](64/dataset.zip) |  |  |  |  |  |  |  |  |
| 65 | 26 | [Download](65/dataset.zip) |  |  |  |  |  |  |  |  |
| 66 | 62 | [Download](66/dataset.zip) |  |  |  |  |  |  |  |  |
| 67 | 27 | [Download](67/dataset.zip) |  |  |  |  |  |  |  |  |
| 68 | 19 | [Download](68/dataset.zip) |  |  |  |  |  |  |  |  |
| 69 | 43 | [Download](69/dataset.zip) |  |  |  |  |  |  |  |  |
| 70 | 87 | [Download](70/dataset.zip) |  |  |  |  |  |  |  |  |
| 71 | 82 | [Download](71/dataset.zip) |  |  |  |  |  |  |  |  |
| 72 | 30 | [Download](72/dataset.zip) |  |  |  |  |  |  |  |  |
| 73 | 52 | [Download](73/dataset.zip) |  |  |  |  |  |  |  |  |
| 74 | 70 | [Download](74/dataset.zip) |  |  |  |  |  |  |  |  |
| 75 | 41 | [Download](75/dataset.zip) |  |  |  |  |  |  |  |  |
| 76 | 39 | [Download](76/dataset.zip) |  |  |  |  |  |  |  |  |
| 77 | 433 | [Download](77/dataset.zip) |  |  |  |  |  |  |  |  |
| 78 | 18 | [Download](78/dataset.zip) |  |  |  |  |  |  |  |  |
| 79 | 12 | [Download](79/dataset.zip) |  |  |  |  |  |  |  |  |
| 80 | 18 | [Download](80/dataset.zip) |  |  |  |  |  |  |  |  |
| 81 | 146 | [Download](81/dataset.zip) |  |  |  |  |  |  |  |  |
| 82 | 19 | [Download](82/dataset.zip) |  |  |  |  |  |  |  |  |
| 83 | 72 | [Download](83/dataset.zip) |  |  |  |  |  |  |  |  |
| 84 | 28 | [Download](84/dataset.zip) |  |  |  |  |  |  |  |  |
| 85 | 29 | [Download](85/dataset.zip) |  |  |  |  |  |  |  |  |
| 86 | 48 | [Download](86/dataset.zip) |  |  |  |  |  |  |  |  |
| 87 | 176 | [Download](87/dataset.zip) |  |  |  |  |  |  |  |  |
| 88 | 63 | [Download](88/dataset.zip) |  |  |  |  |  |  |  |  |
| 89 | 38 | [Download](89/dataset.zip) |  |  |  |  |  |  |  |  |
| 90 | 47 | [Download](90/dataset.zip) |  |  |  |  |  |  |  |  |
| 91 | 87 | [Download](91/dataset.zip) |  |  |  |  |  |  |  |  |
| 92 | 33 | [Download](92/dataset.zip) |  |  |  |  |  |  |  |  |
| 93 | 43 | [Download](93/dataset.zip) |  |  |  |  |  |  |  |  |
| 94 | 63 | [Download](94/dataset.zip) |  |  |  |  |  |  |  |  |
| 95 | 238 | [Download](95/dataset.zip) |  |  |  |  |  |  |  |  |
| 96 | 30 | [Download](96/dataset.zip) |  |  |  |  |  |  |  |  |
| 97 | 47 | [Download](97/dataset.zip) |  |  |  |  |  |  |  |  |
| 98 | 36 | [Download](98/dataset.zip) |  |  |  |  |  |  |  |  |
| 99 | 36 | [Download](99/dataset.zip) |  |  |  |  |  |  |  |  |
| 100 | 26 | [Download](100/dataset.zip) |  |  |  |  |  |  |  |  |
| 101 | 32 | [Download](101/dataset.zip) |  |  |  |  |  |  |  |  |
| 102 | 23 | [Download](102/dataset.zip) |  |  |  |  |  |  |  |  |
| 103 | 22 | [Download](103/dataset.zip) |  |  |  |  |  |  |  |  |
| 104 | 31 | [Download](104/dataset.zip) |  |  |  |  |  |  |  |  |
| 105 | 208 | [Download](105/dataset.zip) |  |  |  |  |  |  |  |  |
| 106 | 15 | [Download](106/dataset.zip) |  |  |  |  |  |  |  |  |
| 107 | 48 | [Download](107/dataset.zip) |  |  |  |  |  |  |  |  |
| 108 | 18 | [Download](108/dataset.zip) |  |  |  |  |  |  |  |  |
| 109 | 70 | [Download](109/dataset.zip) |  |  |  |  |  |  |  |  |
| 110 | 50 | [Download](110/dataset.zip) |  |  |  |  |  |  |  |  |
| 111 | 21 | [Download](111/dataset.zip) |  |  |  |  |  |  |  |  |
| 112 | 9 | [Download](112/dataset.zip) |  |  |  |  |  |  |  |  |
| 113 | 70 | [Download](113/dataset.zip) |  |  |  |  |  |  |  |  |
| 114 | 26 | [Download](114/dataset.zip) |  |  |  |  |  |  |  |  |
| 115 | 23 | [Download](115/dataset.zip) |  |  |  |  |  |  |  |  |
| 116 | 21 | [Download](116/dataset.zip) |  |  |  |  |  |  |  |  |
| 117 | 27 | [Download](117/dataset.zip) |  |  |  |  |  |  |  |  |
| 118 | 17 | [Download](118/dataset.zip) |  |  |  |  |  |  |  |  |
| 119 | 368 | [Download](119/dataset.zip) |  |  |  |  |  |  |  |  |
| 120 | 16 | [Download](120/dataset.zip) |  |  |  |  |  |  |  |  |
| 121 | 43 | [Download](121/dataset.zip) |  |  |  |  |  |  |  |  |
| 122 | 30 | [Download](122/dataset.zip) |  |  |  |  |  |  |  |  |
| 123 | 25 | [Download](123/dataset.zip) |  |  |  |  |  |  |  |  |
| 124 | 38 | [Download](124/dataset.zip) |  |  |  |  |  |  |  |  |
| 125 | 42 | [Download](125/dataset.zip) |  |  |  |  |  |  |  |  |
| 126 | 29 | [Download](126/dataset.zip) |  |  |  |  |  |  |  |  |
| 127 | 10 | [Download](127/dataset.zip) |  |  |  |  |  |  |  |  |
| 128 | 23 | [Download](128/dataset.zip) |  |  |  |  |  |  |  |  |
| 129 | 9 | [Download](129/dataset.zip) |  |  |  |  |  |  |  |  |
| 130 | 13 | [Download](130/dataset.zip) |  |  |  |  |  |  |  |  |
| 131 | 10 | [Download](131/dataset.zip) |  |  |  |  |  |  |  |  |
| 132 | 16 | [Download](132/dataset.zip) |  |  |  |  |  |  |  |  |
| 133 | 23 | [Download](133/dataset.zip) |  |  |  |  |  |  |  |  |
| 134 | 10 | [Download](134/dataset.zip) |  |  |  |  |  |  |  |  |
| 135 | 9 | [Download](135/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 382 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
nc33/task2 | 2023-10-09T03:19:19.000Z | [
"region:us"
] | nc33 | null | null | null | 0 | 0 | ---
dataset_info:
config_name: train
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 608247671
num_examples: 241983
download_size: 144867080
dataset_size: 608247671
configs:
- config_name: train
data_files:
- split: train
path: train/train-*
---
# Dataset Card for "task2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
testing445/HUN | 2023-10-07T05:00:39.000Z | [
"region:us"
] | testing445 | null | null | null | 0 | 0 | Entry not found |
BangumiBase/narutomovies | 2023-10-07T06:32:17.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Naruto [movies]
This is the image base of the bangumi NARUTO [Movies]. We detected 37 characters and 3,111 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise.** If you intend to train models on this dataset manually, we recommend preprocessing the downloaded data to remove potentially noisy samples (approximately a 1% noise rate).
Here is a preview of the characters:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 1040 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 44 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 34 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 40 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 90 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 120 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 49 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 107 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 19 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 72 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 75 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 86 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 37 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 35 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 75 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 115 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 23 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 44 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 38 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 68 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 158 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 19 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 45 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 45 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 247 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 18 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 9 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 15 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 11 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 6 | [Download](29/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 30 | 60 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 87 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 11 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 35 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 16 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 7 | [Download](35/dataset.zip) |  |  |  |  |  |  |  | N/A |
| noise | 111 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
emozilla/Long-Data-Collections-Pretrain-Without-Books | 2023-10-09T14:32:21.000Z | [
"region:us"
] | emozilla | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 236565210292
num_examples: 9383848
download_size: 25749677954
dataset_size: 236565210292
---
# Dataset Card for "Long-Data-Collections-Pretrain-Without-Books"
Parquet version of the pretrain split of [togethercomputer/Long-Data-Collections](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) WITHOUT books
Statistics (in # of characters): `total_len: 236088622215, average_len: 25159.041601590307`
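As a rough check of these figures, here is a minimal sketch. It assumes the repo id `emozilla/Long-Data-Collections-Pretrain-Without-Books` streams through the standard `datasets` API; the `text` feature name comes from the YAML above, and streaming is used only to avoid materializing the ~25 GB download up front.
```python
from datasets import load_dataset

# Stream the train split so the ~25 GB of parquet files are not downloaded in full first.
ds = load_dataset(
    "emozilla/Long-Data-Collections-Pretrain-Without-Books",
    split="train",
    streaming=True,
)

total_len = 0
num_examples = 0
for example in ds:
    total_len += len(example["text"])
    num_examples += 1

# Should land near the card's figures: total_len ~2.37e11, average_len ~25159.
print(total_len, total_len / num_examples)
```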
|
keshavsudo007/massive | 2023-10-07T05:51:38.000Z | [
"region:us"
] | keshavsudo007 | null | null | null | 0 | 0 | Entry not found |
Jaredquek/AuroMiraWorks | 2023-10-07T07:35:47.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"language:en",
"license:mit",
"philosophy",
"religion",
"spirituality",
"occult",
"indian philosophy",
"hinduism",
"region:us"
] | Jaredquek | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- question-answering
- conversational
language:
- en
tags:
- philosophy
- religion
- spirituality
- occult
- indian philosophy
- hinduism
---
This 'text completion' dataset (originally in jsonl format) comprises the major prose works of Sri Aurobindo, the Indian philosopher, seer and poet, and his spiritual partner, Mirra Alfassa. The following works have been used:
### Sri Aurobindo:
- Letters on Yoga 1, 2, 3, 4
- Letters on Himself and the Ashram
- The Mother with Letters on the Mother
- The Life Divine
- The Synthesis of Yoga
- The Renaissance in India
- The Secret of the Veda
- Essays Divine and Human
- Essays on the Gita
- Essays in Philosophy and Yoga
- The Future Poetry
- The Human Cycle
- Isha Upanishad
### Mirra (the Mother's):
- Questions and Answers (all volumes)
- Prayers and Meditation
- On Education
- On Thoughts and Aphorisms
- Words of the Mother (all volumes)
The titles of books have been removed to reduce hallucinatory misquotes. We believe this dataset is useful to train AIs to converse on spiritual and philosophical topics, as Sri Aurobindo's writings relate a deep and complex spiritual philosophy to all areas of life and thought.
Anyone interested in datasets by individual books (or in building 'spiritual AIs') - please message me at my Twitter account [@jared_quek](https://twitter.com/jared_quek).
|
lakelz/myds2-bpg | 2023-10-07T07:14:30.000Z | [
"region:us"
] | lakelz | null | null | null | 0 | 0 | This dataset is a subset of the Open Assistant dataset, which you can find here: https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main
This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
This dataset was used to train Guanaco with QLoRA.
For further information, please see the original dataset.
License: Apache 2.0 |
lighteval/trivia_qa | 2023-10-07T07:35:06.000Z | [
"region:us"
] | lighteval | null | null | null | 0 | 0 | ---
dataset_info:
- config_name: default
features:
- name: question
dtype: string
- name: question_id
dtype: string
- name: question_source
dtype: string
- name: entity_pages
sequence:
- name: doc_source
dtype: string
- name: filename
dtype: string
- name: title
dtype: string
- name: wiki_context
dtype: string
- name: search_results
sequence:
- name: description
dtype: string
- name: filename
dtype: string
- name: rank
dtype: int32
- name: title
dtype: string
- name: url
dtype: string
- name: search_context
dtype: string
- name: answer
struct:
- name: aliases
sequence: string
- name: normalized_aliases
sequence: string
- name: matched_wiki_entity_name
dtype: string
- name: normalized_matched_wiki_entity_name
dtype: string
- name: normalized_value
dtype: string
- name: type
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 106882730
num_examples: 138384
- name: validation
num_bytes: 14059830
num_examples: 17944
- name: test
num_bytes: 3667903
num_examples: 17210
download_size: 63926518
dataset_size: 124610463
- config_name: rc.nocontext
features:
- name: question
dtype: string
- name: question_id
dtype: string
- name: question_source
dtype: string
- name: entity_pages
sequence:
- name: doc_source
dtype: string
- name: filename
dtype: string
- name: title
dtype: string
- name: wiki_context
dtype: string
- name: search_results
sequence:
- name: description
dtype: string
- name: filename
dtype: string
- name: rank
dtype: int32
- name: title
dtype: string
- name: url
dtype: string
- name: search_context
dtype: string
- name: answer
struct:
- name: aliases
sequence: string
- name: normalized_aliases
sequence: string
- name: matched_wiki_entity_name
dtype: string
- name: normalized_matched_wiki_entity_name
dtype: string
- name: normalized_value
dtype: string
- name: type
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 106882730
num_examples: 138384
- name: validation
num_bytes: 14059830
num_examples: 17944
- name: test
num_bytes: 3667903
num_examples: 17210
download_size: 63926518
dataset_size: 124610463
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- config_name: rc.nocontext
data_files:
- split: train
path: rc.nocontext/train-*
- split: validation
path: rc.nocontext/validation-*
- split: test
path: rc.nocontext/test-*
---
# Dataset Card for "trivia_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CWKSC/common_voice_13_0-zh-HK-whisper-small | 2023-10-07T08:05:00.000Z | [
"region:us"
] | CWKSC | null | null | null | 1 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 13463992440
num_examples: 14018
- name: test
num_bytes: 5371995872
num_examples: 5593
download_size: 0
dataset_size: 18835988312
---
# Dataset Card for "common_voice_13_0-zh-HK-whisper-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chargoddard/rpguild | 2023-10-11T01:12:02.000Z | [
"region:us"
] | chargoddard | null | null | null | 0 | 0 | ---
dataset_info:
- config_name: default
features:
- name: username
dtype: string
- name: char_name
dtype: string
- name: bio
dtype: string
- name: context
list:
- name: text
dtype: string
- name: username
dtype: string
- name: char_name
dtype: string
- name: reply
dtype: string
- name: has_nameless
dtype: bool
- name: char_confidence
dtype: float64
splits:
- name: train
num_bytes: 1921588254
num_examples: 140469
download_size: 764073630
dataset_size: 1921588254
- config_name: high_confidence
features:
- name: username
dtype: string
- name: char_name
dtype: string
- name: bio
dtype: string
- name: context
list:
- name: text
dtype: string
- name: username
dtype: string
- name: char_name
dtype: string
- name: reply
dtype: string
- name: has_nameless
dtype: bool
- name: char_confidence
dtype: float64
splits:
- name: train
num_bytes: 949419370.7676569
num_examples: 69403
download_size: 386317057
dataset_size: 949419370.7676569
- config_name: pruned
features:
- name: username
dtype: string
- name: char_name
dtype: string
- name: bio
dtype: string
- name: context
list:
- name: text
dtype: string
- name: username
dtype: string
- name: char_name
dtype: string
- name: reply
dtype: string
- name: has_nameless
dtype: bool
- name: char_confidence
dtype: float64
splits:
- name: train
num_bytes: 782484734.2032762
num_examples: 57200
download_size: 326987882
dataset_size: 782484734.2032762
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: high_confidence
data_files:
- split: train
path: high_confidence/train-*
- config_name: pruned
data_files:
- split: train
path: pruned/train-*
---
# Dataset Card for "rpguild"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Oikawakaki/Ming-Landsape-Painting | 2023-10-07T08:51:50.000Z | [
"art",
"region:us"
] | Oikawakaki | null | null | null | 1 | 0 | ---
tags:
- art
--- |
Weyaxi/results | 2023-10-07T18:09:00.000Z | [
"region:us"
] | Weyaxi | null | null | null | 0 | 0 | Entry not found |
106306046derrick/name_of_your_dataset | 2023-10-07T10:18:37.000Z | [
"region:us"
] | 106306046derrick | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 11058568885.924
num_examples: 1390436
- name: validation
num_bytes: 834808939.688
num_examples: 158152
download_size: 11785641447
dataset_size: 11893377825.612
---
# Dataset Card for "name_of_your_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Antonio49/444 | 2023-10-07T09:04:39.000Z | [
"region:us"
] | Antonio49 | null | null | null | 0 | 0 | Entry not found |
chunpingvi/dataset_tone3 | 2023-10-07T09:26:34.000Z | [
"region:us"
] | chunpingvi | null | null | null | 0 | 0 | Entry not found |
wuming156/hassakuXLSfwNsfw_alpha05 | 2023-10-07T10:10:26.000Z | [
"region:us"
] | wuming156 | null | null | null | 0 | 0 | Entry not found |
vladman-25/flickr-30k-romanian-captions | 2023-10-07T10:54:18.000Z | [
"license:unknown",
"region:us"
] | vladman-25 | null | null | null | 0 | 0 | ---
license: unknown
---
# Dataset Card for Flickr 30k Romanian Captions
### Dataset Summary
This dataset is a Romanian translation of the Flickr 30k captions dataset.
It was generated with [nllb-200-distilled-1.3B](https://huggingface.co/facebook/nllb-200-distilled-1.3B), using Hugging Face for both tokenization and translation.
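For reference, a rough sketch of how such a translation can be reproduced with the Hugging Face `transformers` translation pipeline is shown below; the batching, example captions, and `max_length` value are illustrative assumptions, not the exact script used to build this dataset.
```python
from transformers import pipeline

# NLLB-200 uses FLORES-200 language codes: English = "eng_Latn", Romanian = "ron_Latn".
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-1.3B",
    src_lang="eng_Latn",
    tgt_lang="ron_Latn",
)

captions = [
    "Two young men jump over a railing.",
    "A man holds a large stuffed lion toy.",
]
for caption in captions:
    result = translator(caption, max_length=128)
    print(result[0]["translation_text"])
```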
Observations:
* the translation keeps the context pretty well.
* there are a few grammatical errors: "Doi tineri sare peste un balustradă"
* some translations are silly: "Un bărbat ţine o jucărie mare de leu împăiat.", "Un bărbat cu barbă care poartă un dulap."
### Languages
Romanian |
MARADA/Alexandre | 2023-10-07T10:48:31.000Z | [
"license:apache-2.0",
"region:us"
] | MARADA | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
Mxode/C-Language-Chat-Debug-Multiturn-Zh | 2023-10-07T10:52:26.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"language:zh",
"license:apache-2.0",
"code",
"C",
"chat",
"assistant",
"multiturn",
"debug",
"region:us"
] | Mxode | null | null | null | 0 | 0 | ---
license: apache-2.0
task_categories:
- question-answering
- conversational
language:
- zh
tags:
- code
- C
- chat
- assistant
- multiturn
- debug
---
Around 1,300 user-assistant **multi-turn conversations** in **C language** scenarios. Each conversation has been organized into a single-line format. A sample entry is shown below:
```JSON
{
"id": 1045,
"conversation": [
{
"user": "你好,AI助手。我最近在写一个C语言程序,但是遇到了一些问题,希望你能帮我检查一下。",
"assistant": "你好,我很乐意帮助你。请把你的代码发给我,我会尽快检查并给出建议。"
},
{
"user": "好的,这是我的代码。这段代码的主要功能是计算斐波那契数列的前n项。",
"assistant": "让我看一下......嗯,这里有一个小错误。在第10行,你应该使用`++i`而不是`i++`来递增i的值。修改后的代码应该是这样的\\n```c\\nfor (int i = 0; i < n; ++i) {\\n if (i == 0 || i == 1) {\\n printf(\\\"%d \\\", fib[i]);\\n } else {\\n fib[i] = fib[i - 1] + fib[i - 2];\\n printf(\\\"%d \\\", fib[i]);\\n }\\n}\\n```"
},
{
"user": "谢谢你的帮助!我还有一个问题。这段代码中有一个递归函数,用于计算斐波那契数列。但是我发现这个递归函数的效率很低,因为它会重复计算很多相同的子问题。有什么方法可以优化这个递归函数吗?",
"assistant": "当然可以。你可以尝试使用动态规划的方法来优化这个递归函数。这样可以将已经计算过的子问题的解存储起来,避免重复计算。下面是一个使用动态规划优化后的斐波那契数列计算函数\\n```c\\n#include <stdio.h>\\nint fib[100]; // 用于存储斐波那契数列的数组\\nvoid fib_dp(int n) {\\n fib[0] = 0;\\n fib[1] = 1;\\n for (int i = 2; i <= n; ++i) {\\n fib[i] = fib[i - 1] + fib[i - 2];\\n }\\n}\\nint main() {\\n int n = 10; // 你想要计算的斐波那契数列的项数\\n fib_dp(n);\\n for (int i = 0; i <= n; ++i) {\\n printf(\\\"%d \\\", fib[i]);\\n }\\n return 0;\\n}\\n```"
},
{
"user": "哇,这真是太棒了!你的建议大大提高了程序的运行速度。再次感谢你的帮助!\n",
"assistant": ""
}
]
}
```
All assistant mentions use the name "AI助手" ("AI assistant"); if you want a specific assistant name, **you can do a global replacement of "AI助手"**.
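A minimal sketch of such a renaming pass, assuming the data sits in a local file named `data.jsonl` with one JSON object per line (the file name and the replacement name are illustrative assumptions):
```python
import json

ASSISTANT_NAME = "小助手"  # hypothetical replacement name

with open("data.jsonl", encoding="utf-8") as f_in, \
     open("data_renamed.jsonl", "w", encoding="utf-8") as f_out:
    for line in f_in:
        record = json.loads(line)
        # Replace the default assistant name in every turn of the conversation.
        for turn in record["conversation"]:
            turn["user"] = turn["user"].replace("AI助手", ASSISTANT_NAME)
            turn["assistant"] = turn["assistant"].replace("AI助手", ASSISTANT_NAME)
        f_out.write(json.dumps(record, ensure_ascii=False) + "\n")
```
|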
Weyaxi/requests | 2023-10-07T10:54:23.000Z | [
"region:us"
] | Weyaxi | null | null | null | 0 | 0 | # Copy of the h4 queue repo
Contains info for launching a model on the cluster to be evaluated with lighteval |
mrsearchwolf/cricket-wiki | 2023-10-09T10:33:33.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"cricket",
"region:us"
] | mrsearchwolf | null | null | null | 1 | 0 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- cricket
size_categories:
- 10K<n<100K
---
# cricket-wiki
```bash
# data preparation steps
pip install wikiextractor
wget -c https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles-multistream.xml.bz2
# extract files using wikiextractor (takes a few hours)
python3 -m wikiextractor.WikiExtractor enwiki-latest-pages-articles-multistream.xml.bz2 --json
# get cricket records in a separate file
# takes a few minutes
grep -i cricket text/*/* > cricket.jsonl
``` |
MARADA/Alexandra | 2023-10-07T11:08:26.000Z | [
"license:openrail",
"region:us"
] | MARADA | null | null | null | 0 | 0 | ---
license: openrail
---
|
LuminanceImagine/SiaFullVocalstems | 2023-10-07T11:09:55.000Z | [
"region:us"
] | LuminanceImagine | null | null | null | 0 | 0 | Entry not found |
infCapital/vietllama-tiny-envi | 2023-10-09T09:02:41.000Z | [
"task_categories:question-answering",
"language:vi",
"language:en",
"license:apache-2.0",
"region:us"
] | infCapital | null | null | null | 0 | 0 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- vi
- en
---
+ Instruction dataset for fine-tuning
+ The dataset contains the original datasets [lima, orca-mini, alpaca data, alpaca finance, GPTeacher] and their Vietnamese translations
+ Suggested use cases: fine-tuning a Vietnamese LLM |
mjbuehler/Mistral_v102Mistreal | 2023-10-07T11:52:39.000Z | [
"region:us"
] | mjbuehler | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 29736437
num_examples: 22282
download_size: 13450961
dataset_size: 29736437
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Mistral_v102Mistreal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Intuit-GenSRF/joangaes-depression-es | 2023-10-07T12:00:23.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: num_tokens
dtype: int64
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 31578828
num_examples: 25828
download_size: 19244935
dataset_size: 31578828
---
# Dataset Card for "joangaes-depression-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
stepkurniawan/test | 2023-10-08T09:34:08.000Z | [
"region:us"
] | stepkurniawan | null | null | null | 0 | 0 | ---
dataset_info:
- config_name: default
features:
- name: pokemon
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 43
num_examples: 2
download_size: 1215
dataset_size: 43
- config_name: starters
features:
- name: pokemon
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 65
num_examples: 3
download_size: 1235
dataset_size: 65
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: starters
data_files:
- split: train
path: starters/train-*
---
# Dataset Card for "test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gjoy/validate_classifier_retry | 2023-10-07T12:20:59.000Z | [
"region:us"
] | gjoy | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-ade_corpus_v2-Ade_corpus_v2_classification-d9292a-93577145879 | 2023-10-07T12:39:22.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
DGurgurov/mental_disorders_data | 2023-10-07T13:08:58.000Z | [
"region:us"
] | DGurgurov | null | null | null | 0 | 0 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc/guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
### Dataset Summary
This repository comprises three distinct datasets:
1. **Character Descriptions from 12 Authors across 4 Countries**
Character descriptions authored by a diverse group of writers.
2. **Character Descriptions labeled with Mental Disorders using Cosine Similarity**
Character descriptions annotated with labels corresponding to mental disorders.
3. **Data on Mental Disorders**
Data pertaining to mental disorders.
### Languages
- English
## Dataset Creation
- Character Descriptions datasets were generated by initiating prompts with ChatGPT.
- Character Descriptions were annotated using Cosine Similarity.
- Data on Mental Disorders was extracted through scraping the International Classification of Diseases (ICD).
### Contributors
- Daniil Gurgurov
- Nursulu Sagimbayeva
- Antonia Wächter
- Asmaa Ibrahim
### Additional Details
- Project Website: [Link to Project Website](https://d-gurgurov.github.io/projects/project1.html)
- Deep Learning for Literary Analysis Report: [Link to Report](https://github.com/d-gurgurov/dl_for_text_analysis/blob/main/Deep_Learning_for_Literary_Analysis_Report.pdf)
|
MaratKhabibullin/ma_qa_large_sentences | 2023-10-09T07:35:23.000Z | [
"region:us"
] | MaratKhabibullin | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
splits:
- name: train
num_bytes: 251028
num_examples: 778
- name: validation
num_bytes: 247985
num_examples: 779
download_size: 227946
dataset_size: 499013
---
# Dataset Card for "ma_qa_large_sentences"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MaratKhabibullin/ma_qa_large_sentences_full | 2023-10-09T07:35:29.000Z | [
"region:us"
] | MaratKhabibullin | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
splits:
- name: train
num_bytes: 499013
num_examples: 1557
download_size: 203570
dataset_size: 499013
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ma_qa_large_sentences_full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
erecprime/ErecPrime | 2023-10-07T13:41:57.000Z | [
"region:us"
] | erecprime | null | null | null | 0 | 0 | Patients undergoing andropause often do not choose these therapies, opting instead to try dietary supplements. As millions of baby boomers are currently experiencing andropause, marketers offer hundreds of products allegedly beneficial in reversing impotence and enhancing male sexual performance.
ErecPrime Male Enhancement
ErecPrime Male Enhancement Pills
ErecPrime Male Enhancement Reviews
Erec Prime Male Enhancement
https://www.supplementz.org/erecprime-male-enhancement/
https://www.supplementz.org/animale-me-capsules/
https://www.supplementz.org/vista-keto-acv-gummies/
https://www.supplementz.org/ |
tr416/tommys_mad_model_dataset_20231007_141121 | 2023-10-07T14:11:22.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74087
dataset_size: 770400.0
---
# Dataset Card for "tommys_mad_model_dataset_20231007_141121"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/test_dataset_20231007_143435 | 2023-10-07T14:34:37.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73765
dataset_size: 770400.0
---
# Dataset Card for "test_dataset_20231007_143435"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
meta-math/MetaMathQA-40K | 2023-10-07T15:21:40.000Z | [
"license:apache-2.0",
"arxiv:2309.12284",
"region:us"
] | meta-math | null | null | null | 2 | 0 | ---
license: apache-2.0
---
arxiv.org/abs/2309.12284
View the project page:
https://meta-math.github.io/ |
ayoubkirouane/news_summary | 2023-10-07T16:08:54.000Z | [
"region:us"
] | ayoubkirouane | null | null | null | 0 | 0 | Entry not found |
leharris3/basketball-shooting-video-classification | 2023-10-07T16:14:09.000Z | [
"region:us"
] | leharris3 | null | null | null | 0 | 0 | Entry not found |
Alfaxad/BioGalacticModels-Zoo | 2023-10-08T14:07:22.000Z | [
"region:us"
] | Alfaxad | null | null | null | 0 | 0 | 
# 🌌 BioGalacticModels Zoo
## **🔭 Overview**
### **🛰️ Space Biology Datasets And Models Hub**
The frontier of space biology research is vast, with uncharted territories that hide the secrets of life beyond our blue planet. 🌍 With the increasing need for accurate and reliable methods to understand and decode the effects of space on biology, machine learning, particularly transfer learning, emerges as a promising approach. 🧠 This repository serves as a nexus between space biology and computational methodologies, aimed at harnessing the power of transfer learning for space biology applications. 💡
We present to you a comprehensive database of publicly available biomedical datasets and models that can be used to further space-biology research and discovery. 🌠
### **🚀 Purpose and Scope**
This repository is designed to:
1. **Centralize Resources**: 📚 Provide a curated collection of GeneLab datasets tailored for space biology studies, ranging from whole genome sequencing to DNA methylation.
2. **Promote Transfer Learning**: 🎓 Offer pre-trained models suitable for transfer learning.
3. **Streamline Data Processing**: ⚙️ Offer code samples and scripts for efficient dataset management.
4. **Facilitate Collaboration**: 🤝 Foster collaboration amongst researchers in the field.
5. **Reference Architectures**: 🗺️ Navigate through transfer learning architectures with ease.
### **🎯 Intended Audience**
This hub is for:
- **Space Biologists**: 🔬 Integrating computational methodologies.
- **Data Scientists & Machine Learning Enthusiasts**: 💻 Tackling challenges in space biology.
- **Students & Educators**: 📖 Accessing resources for computational space biology.
### **✉️ Contributing and Feedback**
We believe in community-driven science. 💖 Your contributions are warmly welcomed! By joining hands, we can venture further into the mysteries of space biology.
---
🛸 Join us in this interstellar journey of melding computation and space biology, steering the future of life in space.
## 📜 Table of Contents
- [BioGalactic Models](#🌠-biogalactic-models)
- [Datasets](#🧬-datasets)
- [Insights On BioGalacticModels Zoo Usage & Exploration](#💭-insights-on-biogalacticmodels-zoo-usage-&-exploration)
- [Transfer Learning Model Architectures for Space Biology](#🌐-promising-transfer-learning-model-architectures-for-space-biology)
- [Demo: Predicting Viral Host based on Metagenomic Features](#🧪-demo-predicting-viral-host-based-on-metagenomic-features)
---
## 🌠 BioGalactic Models
[BioGalactic Models](https://huggingface.co/spaces/Alfaxad/BioGalacticModels) 🌌 is a dedicated Hugging Face space containing a curated collection of Biology & Biochemistry Foundation Models.
**Significance to the BioGalactic Model Zoo**:
- **Ready-to-use Models**: 🚀 These models are pre-trained, optimized for transfer learning tasks.
- **Diverse Applications**: 🎯 Focused on Biology & Biochemistry, catering to space biology.
- **Continuous Evolution**: 🔄 As space biology progresses, this space will evolve.
**Impacting Space Biology Exploration**:
The models provide insights driving our understanding of life in space conditions. These include:
- Decoding genomic sequences.
- Predicting protein structures and interactions.
- Analyzing metabolic pathways in space.
---
## 🧬 Datasets
Dive into the curated datasets, specifically tailored for space biology studies. These datasets, coming directly from the vaults of NASA's GeneLab, cover a range of biological investigations relevant to space.
### **Whole Genome Sequencing Datasets**
1. [Microbiome profiling of feces from mice flown on the RR-10 mission](https://osdr.nasa.gov/bio/repo/data/studies/OSD-466)
2. [Metagenome profiling of feces from mice flown on the RR-23 mission](https://osdr.nasa.gov/bio/repo/data/studies/OSD-465)
3. [Whole genome sequencing and assembly of Eukaryotic microbes isolated from ISS environmental surface, Kirovograd region soil, Chernobyl Nuclear Power Plant and Chernobyl Exclusion Zone](https://osdr.nasa.gov/bio/repo/data/studies/OSD-132)
4. [Draft Genome Sequences of novel Agrobacterium genomospecies 3 Associated from the International Space Station](https://osdr.nasa.gov/bio/repo/data/studies/OSD-306)
5. [Metagenomic analysis of feces from mice flown on the RR-6 mission](https://osdr.nasa.gov/bio/repo/data/studies/OSD-249)
6. [Insta-Deep's Multi-species genome dataset](https://huggingface.co/datasets/InstaDeepAI/multi_species_genomes)
### **DNA Methylation Datasets**
1. [Changes in DNA Methylation in Arabidopsis thaliana Plants Exposed Over Multiple Generations to Gamma Radiation](https://osdr.nasa.gov/bio/repo/data/studies/OSD-520)
2. [Characterization of Epigenetic Regulation in an Extraterrestrial Environment: The Arabidopsis Spaceflight Methylome](https://osdr.nasa.gov/bio/repo/data/studies/OSD-217)
3. [Ionizing radiation induces transgenerational effects of DNA methylation in zebrafish](https://osdr.nasa.gov/bio/repo/data/studies/OSD-524)
4. [Methylome Analysis of Arabidopsis Seedlings Exposed to Microgravity](https://osdr.nasa.gov/bio/repo/data/studies/OSD-220)
For an exhaustive list of datasets and other resources, explore [NASA's Open Science Data Repository (OSDR)](https://osdr.nasa.gov/bio/repo/search?q=&data_source=cgene,alsda&data_type=study).
## **Bulk Downloading GeneLab Datasets with genelab-utils**
### **Quick Usage Guide**
# GeneLab utils
Some helper programs for [NASA GeneLab](https://genelab.nasa.gov/), such as `GL-download-GLDS-data` for downloading files from a specific OSD or GLDS ID, and `GL-get-workflow` for downloading workflows used by [GeneLab for processing datasets](https://github.com/nasa/GeneLab_Data_Processing).
## Conda install
The genelab-utils package should be installed with conda/mamba. If you are not familiar with conda, you can find an introduction [here](https://astrobiomike.github.io/unix/conda-intro) if wanted, and if you are not familiar with mamba, there is a super-short introduction on that same page [here](https://astrobiomike.github.io/unix/conda-intro#bonus-mamba-no-5) if wanted – it's definitely worth using mamba if you use conda at all :+1:
```bash
conda install -c conda-forge -n base mamba
mamba create -n genelab-utils -c conda-forge -c bioconda -c defaults -c astrobiomike genelab-utils
conda activate genelab-utils
```
All programs are prefixed with `GL-` and have a help menu accessible with `-h`. Version info can be accessed with `GL-version`.
## Some example pages
- Programmatically downloading [GLDS data](https://genelab-data.ndc.nasa.gov/genelab/)
- [`GL-download-GLDS-data`](https://hackmd.io/@astrobiomike/using-genelab-utils-to-download-GLDS-data)
- Downloading GeneLab workflows
- [`GL-get-workflow`](https://hackmd.io/@astrobiomike/using-genelab-utils-to-download-workflows)
---
## 💭 Insights On BioGalacticModels Zoo Usage & Exploration
### **1. Preprocessing**
For transfer learning, these biomedical datasets may require various preprocessing steps depending on their source and format (a minimal sketch of a typical pipeline follows the list):
- **Data Cleaning:** Removing noise and inconsistencies.
- **Normalization:** Scaling features to a standard range.
- **Data Augmentation:** Especially for image datasets, augmenting data can help improve model robustness.
- **Feature Selection/Extraction:** Especially in genomics, where dimensionality can be very high.
- **Handling Imbalances:** In some datasets, certain classes may be underrepresented.
- **Format Conversion:** Datasets might need to be converted to formats compatible with machine learning frameworks.
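A minimal sketch of such a pipeline for a tabular omics matrix, using scikit-learn; the array shapes, label meaning, and chosen transforms are illustrative assumptions rather than a prescribed workflow:
```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Illustrative stand-in for a high-dimensional omics matrix: 200 samples x 5000 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))
y = rng.integers(0, 2, size=200)  # e.g. spaceflight vs. ground-control labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Normalization: zero mean / unit variance per feature, fitted on the training split only.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Feature selection: keep the 100 features most associated with the labels.
selector = SelectKBest(f_classif, k=100).fit(X_train, y_train)
X_train, X_test = selector.transform(X_train), selector.transform(X_test)
```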
### **2. Potential Multimodal Data Combinations for Space Biology Knowledge Gain**
Combining different types of datasets, like genomic, proteomic, and transcriptomic data, can provide a holistic view of biological systems.
Additionally, integrating imaging data with molecular data can enhance our understanding of spatial-temporal patterns.
Multi-modal datasets can help discover patterns or signals that might not be evident when analyzing data types in isolation.
#### a. **Genomic & Transcriptomic Data**:
- **Why**: While genomic data (like Whole Genome Sequencing) provides the blueprint of life, transcriptomic data offers insights into gene expression under specific conditions. Combining both can help in understanding the genetic basis of responses to space environments and how genes are expressed differently in space.
#### b. **Proteomic & Metabolomic Data**:
- **Why**: Proteomic data tells us about the proteins produced, while metabolomic data provides information on the small molecules in an organism. Together, they can offer insights into the functional state of cells in space, revealing which proteins are active and what metabolic pathways they're influencing.
#### c. **Transcriptomic & Metabolomic Data**:
- **Why**: This combination can correlate gene expression with metabolic changes. It can be particularly insightful to understand how gene expression changes influence metabolic responses in space conditions.
#### d. **Genomic & Phenotypic Data**:
- **Why**: Connecting the genetic makeup with observable traits (phenotypes) can help in predicting how specific genetic variations might influence an organism's ability to thrive in space.
#### e. **Imaging & Transcriptomic Data**:
- **Why**: While transcriptomic data reveals gene expression, imaging (like MRI or microscopy) can show structural or functional changes in tissues or cells. Combined, they can link gene expression patterns with visual manifestations.
#### f. **Epigenomic & Transcriptomic Data**:
- **Why**: Epigenomic data, like DNA Methylation, reveals changes in gene activity not caused by DNA sequence changes. By combining it with transcriptomic data, one can understand how space conditions might epigenetically influence gene expression.
#### g. **Genomic & Proteomic Data**:
- **Why**: This combination can be used to understand the translation of genes to proteins under space conditions, offering insights into post-transcriptional modifications in space.
#### h. **Environmental Data & Any Biological Data**:
- **Why**: Combining data on the space environment (like radiation levels or microgravity conditions) with any biological dataset can help correlate external conditions with biological responses.
The task of organizing multimodal datasets may face the following challenges:
1. **Data Integration**: Combining data from different sources and modalities can be challenging due to differences in scale, resolution, and format.
2. **Interpretability**: While multi-modal data can provide richer insights, it can also make interpretations complex.
3. **Computational Needs**: Integrating and analyzing multi-modal data often requires robust computational resources and specialized algorithms.
However, the potential insights gained from such combinations, especially in understanding the complex biological responses to space conditions, can be invaluable.
Leveraging transfer learning with models pretrained on diverse biomedical datasets and refined on space biology datasets can significantly boost the knowledge derived from these multi-modal combinations.
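One lightweight way to prototype such combinations is early (feature-level) fusion: concatenating per-sample feature vectors from two modalities before fitting a model. A minimal sketch, assuming both modalities have already been reduced to fixed-length vectors per sample (all shapes and labels below are placeholders):
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder feature vectors for two modalities, e.g. transcriptomic and methylation summaries.
rng = np.random.default_rng(42)
n_samples = 120
X_transcriptome = rng.normal(size=(n_samples, 64))
X_methylome = rng.normal(size=(n_samples, 32))
y = rng.integers(0, 2, size=n_samples)  # e.g. spaceflight vs. ground control

# Early fusion: concatenate modality features along the feature axis.
X_fused = np.concatenate([X_transcriptome, X_methylome], axis=1)

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X_fused, y, cv=5).mean())
```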
---
## 🌐 Promising Transfer Learning Model Architectures for Space Biology
The deep learning domain has birthed numerous architectures tailor-made for transfer learning. These models, having trained on expansive datasets, excel at grasping general features, which can be specialized for niche tasks, such as those in space biology. Here's a selection of architectures ripe for exploration in this challenge:
### 1. **Convolutional Neural Networks (CNNs)**:
Primarily efficient for image-centric data.
- **VGG (e.g., VGG16, VGG19)**: Crafted by the Visual Geometry Group, it's a staple for image recognition.
- **ResNet**: Features skip connections, countering the vanishing gradient dilemma in deep structures.
- **Inception (or GoogLeNet)**: Employs varied convolution sizes for multi-scale detail capture.
- **DenseNet**: Innovatively links each layer to every subsequent one in a feed-forward manner.
### 2. **Transformers**:
Originally for NLP, but have branched out to other areas like imagery.
- **BERT**: Tailored for NLP, it's versatile for text-oriented tasks.
- **ViT (Vision Transformer)**: Modifies the transformer design for visual tasks.
### 3. **Recurrent Neural Networks (RNNs)**:
Best suited for sequences such as time-series or biological sequences.
- **LSTM**: Counters the standard RNN's vanishing gradient issue.
- **GRU**: A streamlined LSTM variant.
### 4. **Autoencoders**:
For unsupervised learning, adept at feature extraction from unlabeled content.
- **Variational Autoencoders (VAEs)**: Introduces a probabilistic layer to autoencoders, frequently in generative scenarios.
### 5. **Generative Adversarial Networks (GANs)**:
Ideal for dataset augmentation, synthesizing data resembling the original distribution.
### 6. **U-Net**:
Conceived for biomedical image segmentation, combining a context-capturing contracting path with an expanding path aimed at precise localization.
### 7. **Capsule Networks**:
Navigates the spatial hierarchy between simple and intricate objects in visuals, potentially invaluable for intricate biological imaging.
### 8. **EfficientNet**:
Balances network width, depth, and input resolution using fixed scaling coefficients, creating potentially smaller yet more accurate models.
### 9. **BioBERT**:
A BERT variant pre-trained on biomedical datasets, apt for biology-centered tasks.
### 10. **AlphaFold**:
By DeepMind, it revolutionizes protein structure prediction, a seminal biological conundrum.
### **Recommendations**:
- For the unique aspects of space biology, initiating with biomedically proven architectures like U-Net could be fruitful (a generic fine-tuning sketch follows this list).
- LSTMs or GRUs, being RNN derivatives, could be promising for genomic or other sequential datasets.
- GANs might be instrumental for data augmentation or crafting synthetic examples to enrich datasets.
- For challenges surrounding protein structures or other molecular biology facets, models like AlphaFold are worthy contenders.
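A generic fine-tuning sketch in PyTorch, illustrating the common transfer-learning recipe of freezing a pretrained backbone and retraining only a new classification head; the ResNet-18 backbone, class count, and torchvision weights enum are assumptions for illustration:
```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # hypothetical number of space-biology image classes

# Load an ImageNet-pretrained ResNet-18 backbone.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head; only this new layer is trained initially.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

trainable = [p for p in model.parameters() if p.requires_grad]
print(f"Trainable tensors: {len(trainable)}")  # just the new head's weight and bias
```
Once the head converges, unfreezing some or all backbone layers at a lower learning rate is a common second stage.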
---
## 🧪 Demo: Predicting Viral Host based on Metagenomic Features
In this repository, we also explore a demo that uses metagenomic features extracted from viral genomes to predict the viral host. Features include genome size, GC%, and CDS count; these serve as the independent variables.
An SVM (Support Vector Machine) model is used, achieving an accuracy rate of 86%. Dive deeper into the methods, data preprocessing, and results [here](https://huggingface.co/datasets/Alfaxad/Space-Biology-Model-Zoo/blob/main/viral_host_demo/predict-viral-host-based-on-meta-genomic-features.ipynb).
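The general shape of such a classifier is sketched below; this is not the notebook's exact preprocessing or hyperparameters, and the feature values and host labels are synthetic placeholders:
```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Columns: genome size (bp), GC%, CDS count (placeholder values, not the real data).
X = np.array([
    [29903, 38.0, 11],
    [48502, 49.9, 73],
    [170000, 40.1, 150],
    [35000, 55.2, 40],
] * 25, dtype=float)
y = np.array([0, 1, 2, 1] * 25)  # encoded host classes (hypothetical)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```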
|
famepram/llama-2-jk48-demo | 2023-10-07T16:51:12.000Z | [
"license:other",
"region:us"
] | famepram | null | null | null | 0 | 0 | ---
license: other
license_name: readme.md
license_link: LICENSE
---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: table
dtype: string
# Dataset Card for "Llama-2-JKT48-FP"
This dataset is intended to give LLaMA 2 improved coding and instruction-following capabilities, with a specific focus on JKT48 knowledge.
The dataset was created as an exercise in training LLaMA 2. |
putrirAHAYU/inin | 2023-10-07T17:29:19.000Z | [
"region:us"
] | putrirAHAYU | null | null | null | 0 | 0 | Entry not found |
jmontana1980/repo_new | 2023-10-07T17:15:38.000Z | [
"region:us"
] | jmontana1980 | null | null | null | 0 | 0 | Entry not found |
tr416/test_dataset_20231007_171958 | 2023-10-07T17:20:00.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73618
dataset_size: 770400.0
---
# Dataset Card for "test_dataset_20231007_171958"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/test2_dataset_20231007_172035 | 2023-10-07T17:20:36.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73851
dataset_size: 770400.0
---
# Dataset Card for "test2_dataset_20231007_172035"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yyy1227/test | 2023-10-07T17:24:05.000Z | [
"region:us"
] | yyy1227 | null | null | null | 0 | 0 | Entry not found |
johannes-garstenauer/embeddings_from_distilbert_class_heaps_and_eval_part1 | 2023-10-07T17:27:39.000Z | [
"region:us"
] | johannes-garstenauer | null | null | null | 1 | 0 | ---
dataset_info:
features:
- name: struct
dtype: string
- name: label
dtype: int64
- name: pred
dtype: int64
- name: cls_layer_6
sequence: float32
- name: cls_layer_5
sequence: float32
- name: cls_layer_4
sequence: float32
splits:
- name: train
num_bytes: 1281395185
num_examples: 134495
download_size: 1491856532
dataset_size: 1281395185
---
# Dataset Card for "embeddings_from_distilbert_class_heaps_and_eval_part1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Tychos/arxiv-embeddings-ada-002 | 2023-10-08T22:19:55.000Z | [
"license:mit",
"region:us"
] | Tychos | null | null | null | 0 | 0 | ---
license: mit
---
|
stanmalkinson199/CartmanClassico | 2023-10-07T17:58:47.000Z | [
"license:openrail",
"region:us"
] | stanmalkinson199 | null | null | null | 0 | 0 | ---
license: openrail
---
|
sddsdsf/vv | 2023-10-08T12:36:05.000Z | [
"license:other",
"region:us"
] | sddsdsf | null | null | null | 0 | 0 | ---
license: other
license_name: vvcc
license_link: LICENSE
---
|
Rabnawaz/King | 2023-10-07T18:13:32.000Z | [
"license:apache-2.0",
"region:us"
] | Rabnawaz | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
BangumiBase/freeeternalsummer | 2023-10-07T19:53:45.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Free! -eternal Summer-
This is the image base of bangumi Free! -Eternal Summer-, where we detected 24 characters and 2471 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain some noise.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 411 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 274 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 32 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 105 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 215 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 37 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 23 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 45 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 284 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 36 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 54 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 36 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 9 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 238 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 19 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 306 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 11 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 118 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 12 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 14 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 14 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 37 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 5 | [Download](22/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| noise | 136 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
Hack90/ncbi_genbank_part_11 | 2023-10-07T18:31:48.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 1891275103
num_examples: 169329
download_size: 841760730
dataset_size: 1891275103
---
# Dataset Card for "ncbi_genbank_part_11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_73 | 2023-10-07T18:51:59.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 31446287535
num_examples: 1129212
download_size: 14015101306
dataset_size: 31446287535
---
# Dataset Card for "ncbi_genbank_part_73"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/ilsolepenetraleillusioni | 2023-10-07T19:58:38.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Il Sole Penetra Le Illusioni
This is the image base of bangumi il sole penetra le illusioni, where we detected 26 characters and 1875 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain some noise.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 82 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 144 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 47 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 26 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 23 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 27 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 41 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 18 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 12 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 11 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 73 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 17 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 152 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 11 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 14 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 75 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 12 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 206 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 60 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 16 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 12 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 28 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 53 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 8 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 29 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 678 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
matthewfranglen/aste-v2 | 2023-10-09T10:05:10.000Z | [
"task_categories:token-classification",
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"arxiv:1911.01616",
"arxiv:2107.12214",
"arxiv:2010.02609",
"region:us"
] | matthewfranglen | null | null | null | 0 | 0 | ---
pretty_name: "Aspect Sentiment Triplet Extraction v2"
language:
- en
arxiv:
- 2107.12214
- 2010.02609
- 1911.01616
size_categories:
- 1K<n<10K
task_categories:
- token-classification
- text-classification
configs:
- config_name: 2014-laptop-sem-eval
data_files:
- split: train
path: "data/2014/laptop/sem-eval/train.gz.parquet"
- split: valid
path: "data/2014/laptop/sem-eval/valid.gz.parquet"
- split: test
path: "data/2014/laptop/sem-eval/test.gz.parquet"
- config_name: 2014-laptop-aste-v2
data_files:
- split: train
path: "data/2014/laptop/aste/train.gz.parquet"
- split: valid
path: "data/2014/laptop/aste/valid.gz.parquet"
- split: test
path: "data/2014/laptop/aste/test.gz.parquet"
- config_name: 2014-restaurant-sem-eval
data_files:
- split: train
path: "data/2014/restaurant/sem-eval/train.gz.parquet"
- split: valid
path: "data/2014/restaurant/sem-eval/valid.gz.parquet"
- split: test
path: "data/2014/restaurant/sem-eval/test.gz.parquet"
- config_name: 2014-restaurant-aste-v2
data_files:
- split: train
path: "data/2014/restaurant/aste/train.gz.parquet"
- split: valid
path: "data/2014/restaurant/aste/valid.gz.parquet"
- split: test
path: "data/2014/restaurant/aste/test.gz.parquet"
- config_name: 2015-restaurant-sem-eval
data_files:
- split: train
path: "data/2015/restaurant/sem-eval/train.gz.parquet"
- split: valid
path: "data/2015/restaurant/sem-eval/valid.gz.parquet"
- split: test
path: "data/2015/restaurant/sem-eval/test.gz.parquet"
- config_name: 2015-restaurant-aste-v2
data_files:
- split: train
path: "data/2015/restaurant/aste/train.gz.parquet"
- split: valid
path: "data/2015/restaurant/aste/valid.gz.parquet"
- split: test
path: "data/2015/restaurant/aste/test.gz.parquet"
- config_name: 2016-restaurant-sem-eval
data_files:
- split: train
path: "data/2016/restaurant/sem-eval/train.gz.parquet"
- split: valid
path: "data/2016/restaurant/sem-eval/valid.gz.parquet"
- split: test
path: "data/2016/restaurant/sem-eval/test.gz.parquet"
- config_name: 2016-restaurant-aste-v2
data_files:
- split: train
path: "data/2016/restaurant/aste/train.gz.parquet"
- split: valid
path: "data/2016/restaurant/aste/valid.gz.parquet"
- split: test
path: "data/2016/restaurant/aste/test.gz.parquet"
---
## Dataset Description
### Task Summary
Aspect Sentiment Triplet Extraction (ASTE) is the task of extracting the triplets of target entities, their associated sentiment, and opinion spans explaining the reason for the sentiment.
This task was first proposed by Peng et al. (2020) in the paper [Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis (In AAAI 2020)](https://arxiv.org/abs/1911.01616).
For Example, given the sentence:
> The screen is very large and crystal clear with amazing colors and resolution .
The objective of the Aspect Sentiment Triplet Extraction (ASTE) task is to predict the triplets:
> [('screen', 'large', 'Positive'), ('screen', 'clear', 'Positive'), ('colors', 'amazing', 'Positive'), ('resolution', 'amazing', 'Positive')]
where a triplet consists of (target, opinion, sentiment).
### Dataset Summary
Sentiment analysis is increasingly viewed as a vital task both from an academic and a commercial standpoint.
The majority of current approaches, however, attempt to detect the overall polarity of a sentence, paragraph, or text span, regardless of the entities mentioned (e.g., laptops, restaurants) and their aspects (e.g., battery, screen; food, service).
By contrast, this task is concerned with aspect based sentiment analysis (ABSA), where the goal is to identify the aspects of given target entities and the sentiment expressed towards each aspect.
This dataset consists of customer reviews with human-authored annotations identifying the mentioned aspects of the target entities and the sentiment polarity of each aspect.
### Dataset Source
The ASTE dataset is from the [xuuuluuu/SemEval-Triplet-data](https://github.com/xuuuluuu/SemEval-Triplet-data) repository.
It is based on the Sem Eval 2014, 2015 and 2016 datasets, with some preprocessing applied to the text.
* [Sem Eval 2014 Task 4](https://alt.qcri.org/semeval2014/task4/)
* [Sem Eval 2015 Task 12](https://alt.qcri.org/semeval2015/task12/)
* [Sem Eval 2016 Task 5](https://alt.qcri.org/semeval2016/task5/)
### Dataset Details
The train, validation and test splits come from the ASTE dataset.
There are the following columns:
* index
The ASTE and Sem Eval datasets had multiple annotations per document.
This dataset has a single annotation per row.
To make it easier to collect all annotations for a document, the index can be used to group them.
All annotations for a given document will have the same index.
* text
This is the document that is annotated, either in the ASTE form or in the Sem Eval form (see below for details).
* aspect_start_index
The zero based character index for the first letter of the aspect term
* aspect_end_index
The zero based character index for the last letter of the aspect term
* aspect_term
The aspect term as it appears in the text
* opinion_start_index
The zero based character index for the first letter of the opinion term
* opinion_end_index
The zero based character index for the last letter of the opinion term
* opinion_term
The opinion term as it appears in the text
* sentiment
The sentiment class for the opinion about the aspect.
One of _negative_, _neutral_ or _positive_.
The ASTE dataset involved preprocessing the SemEval text.
This preprocessing fixed some of the spelling mistakes, for example:
> Keyboard good sized and wasy to use.
(easy misspelt as wasy).
The preprocessing also includes tokenization of the text and then separating the tokens with whitespace, for example:
> It 's just as fast with one program open as it is with sixteen open .
Since the added whitespace can lead to unnatural text, I have provided two forms of the dataset.
Subsets that end with `aste-v2` have the preprocessed text with spelling correction and additional whitespace.
Subsets that end with `sem-eval` have the original Sem Eval text.
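A minimal loading sketch with the `datasets` library, using one of the subset names listed in the configuration above and grouping rows by `index` to reassemble all triplets of a document (the choice of subset and split here is just an example):
```python
from collections import defaultdict

from datasets import load_dataset

# One row per (aspect, opinion, sentiment) triplet.
train = load_dataset("matthewfranglen/aste-v2", "2014-laptop-aste-v2", split="train")

# Group rows back into documents; rows sharing an index belong to the same text.
docs = defaultdict(list)
for row in train:
    docs[row["index"]].append(
        (row["aspect_term"], row["opinion_term"], row["sentiment"])
    )

first_index = next(iter(docs))
print(first_index, docs[first_index])
```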
### Citation Information
```
@misc{xu2021learning,
title={Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction},
author={Lu Xu and Yew Ken Chia and Lidong Bing},
year={2021},
eprint={2107.12214},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{xu2021positionaware,
title={Position-Aware Tagging for Aspect Sentiment Triplet Extraction},
author={Lu Xu and Hao Li and Wei Lu and Lidong Bing},
year={2021},
eprint={2010.02609},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{peng2019knowing,
title={Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis},
author={Haiyun Peng and Lu Xu and Lidong Bing and Fei Huang and Wei Lu and Luo Si},
year={2019},
eprint={1911.01616},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Hack90/ncbi_genbank_part_46 | 2023-10-07T19:46:27.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 45473431595
num_examples: 198370
download_size: 20050383599
dataset_size: 45473431595
---
# Dataset Card for "ncbi_genbank_part_46"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_74 | 2023-10-07T19:24:02.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 33100376103
num_examples: 414925
download_size: 14899366001
dataset_size: 33100376103
---
# Dataset Card for "ncbi_genbank_part_74"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ummagumm-a/colorization_dataset | 2023-10-07T19:22:39.000Z | [
"region:us"
] | ummagumm-a | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_image
sequence:
sequence:
sequence: uint8
- name: text
dtype: string
splits:
- name: train
num_bytes: 333261193.0
num_examples: 1000
download_size: 127051514
dataset_size: 333261193.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "colorization_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jjisd/sdxlcheckpoints | 2023-10-08T03:17:56.000Z | [
"license:other",
"region:us"
] | jjisd | null | null | null | 0 | 0 | ---
license: other
license_name: vvcc
license_link: LICENSE
---
|
M-A-D/Mixed-Arabic-Dataset-Main-Test | 2023-10-07T19:17:56.000Z | [
"region:us"
] | M-A-D | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: GenId
dtype: int64
- name: SubId
dtype: int64
- name: DatasetName
dtype: string
- name: DatasetLink
dtype: string
- name: Text
dtype: string
- name: MetaData
struct:
- name: AboutAuthor
dtype: 'null'
- name: AboutBook
dtype: 'null'
- name: Author
dtype: 'null'
- name: AuthorName
dtype: 'null'
- name: BookLink
dtype: 'null'
- name: BookName
dtype: 'null'
- name: ChapterLink
dtype: 'null'
- name: ChapterName
dtype: 'null'
- name: Tags
dtype: 'null'
- name: __index_level_0__
dtype: float64
- name: created_date
dtype: string
- name: deleted
dtype: bool
- name: detoxify
dtype: 'null'
- name: emojis
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: id
dtype: string
- name: labels
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: value
sequence: float64
- name: lang
dtype: string
- name: message_id
dtype: string
- name: message_tree_id
dtype: string
- name: model_name
dtype: 'null'
- name: parent_id
dtype: string
- name: query_id
dtype: 'null'
- name: rank
dtype: float64
- name: review_count
dtype: float64
- name: review_result
dtype: bool
- name: role
dtype: string
- name: synthetic
dtype: bool
- name: title
dtype: 'null'
- name: tree_state
dtype: string
- name: url
dtype: 'null'
- name: user_id
dtype: string
- name: ConcatenatedText
dtype: int64
splits:
- name: train
num_bytes: 96491917
num_examples: 71935
download_size: 37192033
dataset_size: 96491917
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Mixed-Arabic-Dataset-Main-Test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/zombielandsagarevenge | 2023-10-07T20:45:00.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Zombie Land Saga Revenge
This is the image base of the bangumi Zombie Land Saga Revenge. We detected 36 characters and 2,401 images in total. The full dataset is available [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noisy samples.** If you intend to train models on this dataset, we recommend preprocessing the downloaded files to remove potentially noisy samples (roughly a 1% chance per image), for example along the lines of the sketch below.
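As a hedged sketch of that preprocessing workflow, one could pull a single character archive from the repository and extract it locally before weeding out noisy images by hand. The repository id and the per-character `dataset.zip` paths come from the preview table below; the download-and-extract steps and the local directory layout are assumptions about how the archives might be consumed.
```python
# Hedged sketch: fetch one character archive from "BangumiBase/zombielandsagarevenge"
# and unpack it for manual cleaning. The archive path (e.g. "18/dataset.zip") comes from
# the preview table; the local layout is an assumption.
import zipfile
from pathlib import Path

from huggingface_hub import hf_hub_download

archive = hf_hub_download(
    repo_id="BangumiBase/zombielandsagarevenge",
    filename="18/dataset.zip",      # character #18 (217 images) per the preview table
    repo_type="dataset",
)

out_dir = Path("character_18")
out_dir.mkdir(exist_ok=True)
with zipfile.ZipFile(archive) as zf:
    zf.extractall(out_dir)          # then drop the ~1% noisy samples by hand

print(f"Extracted {len(list(out_dir.iterdir()))} files to {out_dir}")
```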
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 127 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 86 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 40 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 80 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 18 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 12 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 61 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 60 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 35 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 40 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 61 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 58 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 31 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 43 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 22 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 10 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 13 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 5 | [Download](17/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 18 | 217 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 46 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 229 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 40 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 87 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 18 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 20 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 57 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 21 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 13 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 196 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 49 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 30 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 92 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 184 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 8 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 8 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 284 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BySwax/JeanPormanove | 2023-10-07T19:21:00.000Z | [
"region:us"
] | BySwax | null | null | null | 0 | 0 | Entry not found |