---
license: cc-by-4.0
tags:
- biology
---
# UniRef50 (Processed, ESM-valid as Validation)
## Dataset Summary
This dataset is a **preprocessed UniRef50** snapshot tailored for **unsupervised protein representation learning**. It:
* Normalizes sequences (uppercase, `*` removed), filters by length and ambiguity, and deduplicates by MD5.
* Splits by **UniRef50 cluster ID** to prevent leakage.
* Uses the **official ESM validation headers** as the entire `valid` split (no sampling).
* Provides **JSONL.zst shards** for efficient streaming with 🤗 `datasets`.
> The exact preprocessing steps are documented under **Preprocessing & Filters** below.
---
## Source
* **Upstream data:** UniProt / UniRef50 (2018_03 snapshot).
* **Evaluation headers:** `uniref201803_ur50_valid_headers.txt` from the ESM paper.
Please respect UniProt terms when using or redistributing this derivative dataset.
---
## Splits
| Split | Definition | Notes |
| ------- | ---------------------------------------------------------------- | ----------------------------------------- |
| `train` | All clusters **not** in ESM valid and not hashed into test | Majority of UniRef50 |
| `valid` | **Only** clusters in ESM’s validation header list | Field `is_esm_valid=true` for all records |
| `test`  | Hash-based holdout by cluster: `xxhash64(cluster_id) % 100 == 2` | Deterministic ~1% holdout |
> Splitting by **cluster_id** avoids train/val/test contamination across cluster members.
---
## Features (Schema)
| Field          | Type    | Description                                                      |
| -------------- | ------- | ---------------------------------------------------------------- |
| `id`           | string  | Stable ID = `cluster_id \| md5[:8]`                              |
| `sequence`     | string  | Normalized AA sequence (uppercase; `*` removed)                  |
| `length`       | int32   | Sequence length after normalization                              |
| `cluster_id`   | string  | UniRef50 cluster ID (e.g., `UniRef50_Q8WZ42-5`)                  |
| `description`  | string? | Optional description parsed from FASTA header (after `Cluster:`) |
| `seq_md5`      | string  | MD5 of normalized sequence                                       |
| `is_esm_valid` | bool    | `true` iff the record belongs to the ESM validation header set   |
> Ambiguous residues: records with ambiguity fraction > 5% (non-canonical AAs) are filtered out by default.
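As a sanity check when consuming shards, a record can be validated against this schema. A minimal sketch (the `check_record` helper is illustrative, not part of the dataset tooling):

```python
# Minimal record check against the schema table above (illustrative helper).
SCHEMA = {
    "id": str, "sequence": str, "length": int, "cluster_id": str,
    "seq_md5": str, "is_esm_valid": bool,
}

def check_record(rec: dict) -> bool:
    for field, typ in SCHEMA.items():
        if not isinstance(rec.get(field), typ):
            return False
    # `description` is optional and may be None.
    if rec.get("description") is not None and not isinstance(rec["description"], str):
        return False
    # Derived invariant: `length` matches the normalized sequence.
    return rec["length"] == len(rec["sequence"])
```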
---
## Preprocessing & Filters
* **Normalization:** uppercase, remove terminal/internal `*`.
* **Length filter:** keep `30 ≤ L ≤ 1024`.
* **Ambiguity filter:** keep sequences with ≤ **5%** non-canonical residues (`ACDEFGHIKLMNPQRSTVWY` are canonical).
* **Deduplication:** exact dedup by MD5 of normalized sequence (global).
* **Splitting:** by `cluster_id` as described above.
* **Headers:** FASTA lines like
`>UniRef50_Q8WZ42-5 Cluster: Isoform 5 of Titin` → `cluster_id="UniRef50_Q8WZ42-5"`, `description="Isoform 5 of Titin"`.
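The steps above can be sketched as follows. Function names and the regex are illustrative, not taken from the released preprocessing script:

```python
# Illustrative sketch of the normalization, filtering, and header-parsing
# steps described above (not the released preprocessing script).
import hashlib
import re

CANONICAL = set("ACDEFGHIKLMNPQRSTVWY")
HEADER_RE = re.compile(r"^>(\S+)(?:\s+Cluster:\s+(.*))?$")

def normalize(seq: str) -> str:
    """Uppercase and strip terminal/internal `*` characters."""
    return seq.upper().replace("*", "")

def keep(seq: str, min_len: int = 30, max_len: int = 1024,
         max_ambiguous: float = 0.05) -> bool:
    """Length filter (30 <= L <= 1024) and <=5% non-canonical residues."""
    if not (min_len <= len(seq) <= max_len):
        return False
    n_ambig = sum(1 for aa in seq if aa not in CANONICAL)
    return n_ambig / len(seq) <= max_ambiguous

def parse_header(line: str):
    """Split a FASTA header into (cluster_id, description)."""
    m = HEADER_RE.match(line.strip())
    return m.group(1), m.group(2)

def seq_md5(seq: str) -> str:
    """MD5 of the normalized sequence, used for global exact dedup."""
    return hashlib.md5(seq.encode("ascii")).hexdigest()
```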
---
## Intended Use
* **Self-supervised training** of protein LMs/encoders that must be robust to substitutions and indels (e.g., OT/UOT objectives).
* **Evaluation** aligned with the ESM paper by using the official validation header set for `valid`.
Not intended for clinical use. No personal data.
---
## How to Load (Streaming & Local)
### Streaming (recommended for large shards)
```python
from datasets import load_dataset
repo = "DeepFoldProtein/uniref50_processed" # replace with your namespace
ds_train = load_dataset(repo, split="train", streaming=True)
row = next(iter(ds_train))
print(row["cluster_id"], row["length"])
```
### Extract ESM-valid subset (within `valid`)
```python
from datasets import load_dataset

repo = "DeepFoldProtein/uniref50_processed"  # replace with your namespace
ds_valid = load_dataset(repo, split="valid", streaming=True)
esm_valid = ds_valid.filter(lambda x: x["is_esm_valid"])
print(next(iter(esm_valid)))
```
### Non-streaming load (small splits only)
```python
from datasets import load_dataset

repo = "DeepFoldProtein/uniref50_processed"  # replace with your namespace
ds_test = load_dataset(repo, split="test")  # materializes locally
print(len(ds_test))
```
---
## Quick Stats Helper
Use this helper to print length statistics per split:
```python
from datasets import load_dataset
import math

def stats(split):
    """Stream one split and return count/min/max/mean/std of sequence length."""
    ds = load_dataset("DeepFoldProtein/uniref50_processed", split=split, streaming=True)
    n = s = s2 = 0
    mn, mx = float("inf"), 0
    for r in ds:
        L = int(r.get("length", len(r["sequence"])))
        n += 1
        s += L
        s2 += L * L
        mn = min(mn, L)
        mx = max(mx, L)
    mean = s / n if n else float("nan")
    std = math.sqrt(max(0.0, s2 / n - mean * mean)) if n else float("nan")
    return {"count": n, "min": mn, "max": mx, "mean": mean, "std": std}

print(stats("train"))
print(stats("valid"))
print(stats("test"))
```
---
## Licensing
* **Data source:** UniProt / UniRef50. Follow the UniProt license and attribution requirements: [https://www.uniprot.org/help/license](https://www.uniprot.org/help/license)
* **Derivative dataset:** You must attribute UniProt and include a link to their license when redistributing.
* **Code (preprocessing):** Provide your own license for the script if you distribute it.
---
## Citation
If you use this dataset, please cite UniProt and (optionally) ESM:
**UniProt:**
> The UniProt Consortium. *UniProt: the universal protein knowledgebase.* Nucleic Acids Res. (2018)
**ESM:**
> Rives et al. *Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences.* PNAS (2021).
---
## Known Limitations
* **Snapshot drift:** This mirrors UniRef50 (2018_03) conventions; later UniRef releases may differ.
* **Non-random validation:** `valid` is defined by ESM’s curated header list (by design).
* **Ambiguity handling:** Sequences with >5% ambiguous residues are dropped; adjust if you need broader coverage.
* **Dedup scope:** Deduplication is by normalized sequence only (not by cluster consensus).
---
## Changelog / Versioning
* **v1.0:** Initial release — ESM-valid set defines `valid`; hash-based `test`; JSONL.zst shards; manifest schema above.
* Future updates will be tagged with semantic versions and described here.
---
## Contact
* **Issues:** Please open a GitHub issue or HF discussion on this dataset repo.
---