---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: sequence
dtype: string
- name: length
dtype: int64
splits:
- name: train
num_bytes: 12071713186
num_examples: 41546293
- name: valid
num_bytes: 24293086
num_examples: 82929
- name: test
num_bytes: 19981814
num_examples: 48941
download_size: 11690105266
dataset_size: 12115988086
---
# UniRef50: UniRef sequences clustered at 50% sequence identity
- ~40M protein sequences.
- Split into train, valid, and test.
# Usage
```python
from datasets import load_dataset
# Step 1: Load the dataset from HuggingFace Hub
dataset = load_dataset("zhangzhi/Uniref50")
# Step 2: Access a specific split (e.g., "train", "valid", "test")
train_split = dataset["train"]
print(f"Number of sequences in the train split: {len(train_split)}")
```
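Each record carries a raw amino-acid `sequence` (string) and its `length` (int64), so records can be validated and analyzed with plain Python. Below is a minimal sketch using a hypothetical in-memory record that mirrors the dataset schema; `residue_composition` is an illustrative helper, not part of the dataset or the `datasets` library.

```python
from collections import Counter

def residue_composition(sequence: str) -> dict:
    """Return the fraction of each amino-acid residue in a sequence."""
    counts = Counter(sequence)
    total = len(sequence)
    return {aa: n / total for aa, n in counts.items()}

# Hypothetical record mirroring the dataset's {"sequence": ..., "length": ...} schema.
record = {"sequence": "MKTAYIAKQR", "length": 10}

# Sanity check: the stored length should match the sequence itself.
assert record["length"] == len(record["sequence"])

composition = residue_composition(record["sequence"])
```

The same helper can be mapped over a loaded split (e.g., with `dataset["train"].map(...)`) to compute per-sequence statistics at scale.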